00:00:00.000 Started by upstream project "autotest-nightly" build number 4217 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3580 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.136 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.137 The recommended git tool is: git 00:00:00.137 using credential 00000000-0000-0000-0000-000000000002 00:00:00.139 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.206 Fetching changes from the remote Git repository 00:00:00.208 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.269 Using shallow fetch with depth 1 00:00:00.269 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.269 > git --version # timeout=10 00:00:00.304 > git --version # 'git version 2.39.2' 00:00:00.304 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.347 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.347 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.070 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.083 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.098 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:07.098 > git config core.sparsecheckout # timeout=10 00:00:07.110 > git read-tree -mu HEAD # timeout=10 00:00:07.126 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:07.156 Commit message: "packer: Fix typo in a package name" 00:00:07.156 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:07.271 [Pipeline] Start of Pipeline 00:00:07.287 [Pipeline] library 00:00:07.289 Loading library shm_lib@master 00:00:07.289 Library shm_lib@master is cached. Copying from home. 00:00:07.301 [Pipeline] node 00:00:07.313 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.315 [Pipeline] { 00:00:07.324 [Pipeline] catchError 00:00:07.326 [Pipeline] { 00:00:07.338 [Pipeline] wrap 00:00:07.347 [Pipeline] { 00:00:07.356 [Pipeline] stage 00:00:07.358 [Pipeline] { (Prologue) 00:00:07.377 [Pipeline] echo 00:00:07.379 Node: VM-host-SM38 00:00:07.389 [Pipeline] cleanWs 00:00:07.401 [WS-CLEANUP] Deleting project workspace... 00:00:07.401 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.408 [WS-CLEANUP] done 00:00:07.617 [Pipeline] setCustomBuildProperty 00:00:07.715 [Pipeline] httpRequest 00:00:08.041 [Pipeline] echo 00:00:08.043 Sorcerer 10.211.164.101 is alive 00:00:08.053 [Pipeline] retry 00:00:08.054 [Pipeline] { 00:00:08.068 [Pipeline] httpRequest 00:00:08.073 HttpMethod: GET 00:00:08.073 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.074 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:08.100 Response Code: HTTP/1.1 200 OK 00:00:08.100 Success: Status code 200 is in the accepted range: 200,404 00:00:08.101 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:33.849 [Pipeline] } 00:00:33.861 [Pipeline] // retry 00:00:33.867 [Pipeline] sh 00:00:34.164 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:34.181 [Pipeline] httpRequest 00:00:34.930 [Pipeline] echo 00:00:34.932 Sorcerer 10.211.164.101 is alive 00:00:34.941 [Pipeline] retry 00:00:34.943 [Pipeline] { 00:00:34.956 [Pipeline] httpRequest 00:00:34.960 HttpMethod: GET 00:00:34.961 URL: http://10.211.164.101/packages/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:34.961 Sending request to url: http://10.211.164.101/packages/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:00:34.978 Response Code: HTTP/1.1 200 OK 00:00:34.978 Success: Status code 200 is in the accepted range: 200,404 00:00:34.979 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:01:28.213 [Pipeline] } 00:01:28.231 [Pipeline] // retry 00:01:28.239 [Pipeline] sh 00:01:28.522 + tar --no-same-owner -xf spdk_169c3cd047cec29b3b1e206c9259a77f3e6a8077.tar.gz 00:01:31.082 [Pipeline] sh 00:01:31.368 + git -C spdk log --oneline -n5 00:01:31.368 169c3cd04 thread: set SPDK_CONFIG_MAX_NUMA_NODES to 1 if not defined 00:01:31.368 cab1decc1 thread: add NUMA node support to spdk_iobuf_put() 00:01:31.368 40c9acf6d env: add spdk_mem_get_numa_id 00:01:31.368 0f99ab2fa thread: allocate iobuf memory based on numa_id 00:01:31.368 2ef611c19 thread: update all iobuf non-get/put functions for multiple NUMA nodes 00:01:31.390 [Pipeline] writeFile 00:01:31.408 [Pipeline] sh 00:01:31.698 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.713 [Pipeline] sh 00:01:32.000 + cat autorun-spdk.conf 00:01:32.000 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.000 SPDK_TEST_NVME=1 00:01:32.000 SPDK_TEST_FTL=1 00:01:32.000 SPDK_TEST_ISAL=1 00:01:32.000 SPDK_RUN_ASAN=1 00:01:32.000 SPDK_RUN_UBSAN=1 00:01:32.000 SPDK_TEST_XNVME=1 00:01:32.000 SPDK_TEST_NVME_FDP=1 00:01:32.000 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.009 RUN_NIGHTLY=1 00:01:32.011 [Pipeline] } 00:01:32.025 [Pipeline] // stage 00:01:32.041 [Pipeline] stage 00:01:32.043 [Pipeline] { (Run VM) 00:01:32.057 [Pipeline] sh 00:01:32.343 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.343 + echo 'Start stage prepare_nvme.sh' 00:01:32.343 Start stage prepare_nvme.sh 00:01:32.343 + [[ -n 2 ]] 00:01:32.343 + disk_prefix=ex2 00:01:32.343 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:32.343 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:32.343 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:32.343 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.343 ++ SPDK_TEST_NVME=1 00:01:32.343 ++ SPDK_TEST_FTL=1 00:01:32.343 ++ SPDK_TEST_ISAL=1 00:01:32.343 
++ SPDK_RUN_ASAN=1 00:01:32.343 ++ SPDK_RUN_UBSAN=1 00:01:32.343 ++ SPDK_TEST_XNVME=1 00:01:32.343 ++ SPDK_TEST_NVME_FDP=1 00:01:32.343 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.343 ++ RUN_NIGHTLY=1 00:01:32.343 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:32.343 + nvme_files=() 00:01:32.343 + declare -A nvme_files 00:01:32.343 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.343 + nvme_files['nvme.img']=5G 00:01:32.343 + nvme_files['nvme-cmb.img']=5G 00:01:32.343 + nvme_files['nvme-multi0.img']=4G 00:01:32.343 + nvme_files['nvme-multi1.img']=4G 00:01:32.343 + nvme_files['nvme-multi2.img']=4G 00:01:32.343 + nvme_files['nvme-openstack.img']=8G 00:01:32.343 + nvme_files['nvme-zns.img']=5G 00:01:32.343 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.343 + (( SPDK_TEST_FTL == 1 )) 00:01:32.343 + nvme_files["nvme-ftl.img"]=6G 00:01:32.343 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.343 + nvme_files["nvme-fdp.img"]=1G 00:01:32.343 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:32.343 + for nvme in "${!nvme_files[@]}" 00:01:32.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:32.344 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.344 + for nvme in "${!nvme_files[@]}" 00:01:32.344 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:32.916 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:32.916 + for nvme in "${!nvme_files[@]}" 00:01:32.916 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:33.176 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.176 + for nvme in "${!nvme_files[@]}" 00:01:33.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:33.176 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:33.176 + for nvme in "${!nvme_files[@]}" 00:01:33.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:33.176 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.176 + for nvme in "${!nvme_files[@]}" 00:01:33.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:33.176 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.176 + for nvme in "${!nvme_files[@]}" 00:01:33.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:33.176 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.176 + for nvme in "${!nvme_files[@]}" 00:01:33.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:33.436 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:33.436 + for nvme in "${!nvme_files[@]}" 00:01:33.436 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:34.007 Formatting 
'/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.007 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:34.007 + echo 'End stage prepare_nvme.sh' 00:01:34.007 End stage prepare_nvme.sh 00:01:34.020 [Pipeline] sh 00:01:34.303 + DISTRO=fedora39 00:01:34.303 + CPUS=10 00:01:34.303 + RAM=12288 00:01:34.303 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:34.303 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:34.303 00:01:34.303 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:34.303 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:34.303 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:34.303 HELP=0 00:01:34.303 DRY_RUN=0 00:01:34.303 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:34.303 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:34.303 NVME_AUTO_CREATE=0 00:01:34.303 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:34.303 NVME_CMB=,,,, 00:01:34.303 NVME_PMR=,,,, 00:01:34.303 NVME_ZNS=,,,, 00:01:34.303 NVME_MS=true,,,, 00:01:34.303 NVME_FDP=,,,on, 00:01:34.303 SPDK_VAGRANT_DISTRO=fedora39 00:01:34.303 SPDK_VAGRANT_VMCPU=10 00:01:34.303 SPDK_VAGRANT_VMRAM=12288 00:01:34.303 SPDK_VAGRANT_PROVIDER=libvirt 00:01:34.304 SPDK_VAGRANT_HTTP_PROXY= 00:01:34.304 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:34.304 SPDK_OPENSTACK_NETWORK=0 00:01:34.304 VAGRANT_PACKAGE_BOX=0 00:01:34.304 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:34.304 FORCE_DISTRO=true 00:01:34.304 VAGRANT_BOX_VERSION= 00:01:34.304 EXTRA_VAGRANTFILES= 00:01:34.304 NIC_MODEL=e1000 00:01:34.304 00:01:34.304 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:34.304 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:36.844 Bringing machine 'default' up with 'libvirt' provider... 00:01:37.105 ==> default: Creating image (snapshot of base box volume). 00:01:37.365 ==> default: Creating domain with the following settings... 
00:01:37.365 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730027782_0e18ba6987afe0265d22 00:01:37.366 ==> default: -- Domain type: kvm 00:01:37.366 ==> default: -- Cpus: 10 00:01:37.366 ==> default: -- Feature: acpi 00:01:37.366 ==> default: -- Feature: apic 00:01:37.366 ==> default: -- Feature: pae 00:01:37.366 ==> default: -- Memory: 12288M 00:01:37.366 ==> default: -- Memory Backing: hugepages: 00:01:37.366 ==> default: -- Management MAC: 00:01:37.366 ==> default: -- Loader: 00:01:37.366 ==> default: -- Nvram: 00:01:37.366 ==> default: -- Base box: spdk/fedora39 00:01:37.366 ==> default: -- Storage pool: default 00:01:37.366 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730027782_0e18ba6987afe0265d22.img (20G) 00:01:37.366 ==> default: -- Volume Cache: default 00:01:37.366 ==> default: -- Kernel: 00:01:37.366 ==> default: -- Initrd: 00:01:37.366 ==> default: -- Graphics Type: vnc 00:01:37.366 ==> default: -- Graphics Port: -1 00:01:37.366 ==> default: -- Graphics IP: 127.0.0.1 00:01:37.366 ==> default: -- Graphics Password: Not defined 00:01:37.366 ==> default: -- Video Type: cirrus 00:01:37.366 ==> default: -- Video VRAM: 9216 00:01:37.366 ==> default: -- Sound Type: 00:01:37.366 ==> default: -- Keymap: en-us 00:01:37.366 ==> default: -- TPM Path: 00:01:37.366 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:37.366 ==> default: -- Command line args: 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:37.366 ==> default: -> value=-drive, 00:01:37.366 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:37.366 ==> default: -> value=-device, 00:01:37.366 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:37.627 ==> default: Creating shared folders metadata... 00:01:37.627 ==> default: Starting domain. 00:01:39.543 ==> default: Waiting for domain to get an IP address... 00:01:57.671 ==> default: Waiting for SSH to become available... 00:01:57.671 ==> default: Configuring and enabling network interfaces... 00:02:00.274 default: SSH address: 192.168.121.74:22 00:02:00.274 default: SSH username: vagrant 00:02:00.274 default: SSH auth method: private key 00:02:02.192 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:10.338 ==> default: Mounting SSHFS shared folder... 00:02:12.258 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.258 ==> default: Checking Mount.. 00:02:13.203 ==> default: Folder Successfully Mounted! 00:02:13.465 00:02:13.465 SUCCESS! 00:02:13.465 00:02:13.465 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:13.465 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.465 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:13.465 00:02:13.477 [Pipeline] } 00:02:13.491 [Pipeline] // stage 00:02:13.501 [Pipeline] dir 00:02:13.501 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:13.503 [Pipeline] { 00:02:13.515 [Pipeline] catchError 00:02:13.517 [Pipeline] { 00:02:13.530 [Pipeline] sh 00:02:13.815 + vagrant ssh-config --host vagrant 00:02:13.815 + sed -ne '/^Host/,$p' 00:02:13.815 + tee ssh_conf 00:02:16.361 Host vagrant 00:02:16.361 HostName 192.168.121.74 00:02:16.361 User vagrant 00:02:16.361 Port 22 00:02:16.361 UserKnownHostsFile /dev/null 00:02:16.361 StrictHostKeyChecking no 00:02:16.361 PasswordAuthentication no 00:02:16.361 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:16.361 IdentitiesOnly yes 00:02:16.361 LogLevel FATAL 00:02:16.362 ForwardAgent yes 00:02:16.362 ForwardX11 yes 00:02:16.362 00:02:16.377 [Pipeline] withEnv 00:02:16.379 [Pipeline] { 00:02:16.396 [Pipeline] sh 00:02:16.685 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:16.685 source /etc/os-release 00:02:16.685 [[ -e /image.version ]] && img=$(< /image.version) 00:02:16.685 # Minimal, systemd-like check. 
00:02:16.685 if [[ -e /.dockerenv ]]; then 00:02:16.685 # Clear garbage from the node'\''s name: 00:02:16.685 # agt-er_autotest_547-896 -> autotest_547-896 00:02:16.685 # $HOSTNAME is the actual container id 00:02:16.685 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:16.685 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:16.685 # We can assume this is a mount from a host where container is running, 00:02:16.685 # so fetch its hostname to easily identify the target swarm worker. 00:02:16.685 container="$(< /etc/hostname) ($agent)" 00:02:16.685 else 00:02:16.685 # Fallback 00:02:16.685 container=$agent 00:02:16.685 fi 00:02:16.685 fi 00:02:16.685 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:16.685 ' 00:02:16.961 [Pipeline] } 00:02:16.978 [Pipeline] // withEnv 00:02:16.988 [Pipeline] setCustomBuildProperty 00:02:17.003 [Pipeline] stage 00:02:17.005 [Pipeline] { (Tests) 00:02:17.022 [Pipeline] sh 00:02:17.307 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:17.581 [Pipeline] sh 00:02:17.862 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:18.137 [Pipeline] timeout 00:02:18.137 Timeout set to expire in 50 min 00:02:18.139 [Pipeline] { 00:02:18.156 [Pipeline] sh 00:02:18.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:19.006 HEAD is now at 169c3cd04 thread: set SPDK_CONFIG_MAX_NUMA_NODES to 1 if not defined 00:02:19.020 [Pipeline] sh 00:02:19.312 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:19.618 [Pipeline] sh 00:02:19.902 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.919 [Pipeline] sh 00:02:20.201 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:20.202 ++ readlink -f spdk_repo 00:02:20.202 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:20.202 + [[ -n /home/vagrant/spdk_repo ]] 00:02:20.202 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:20.202 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:20.202 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:20.202 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:20.202 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:20.202 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:20.202 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:20.202 + cd /home/vagrant/spdk_repo 00:02:20.202 + source /etc/os-release 00:02:20.202 ++ NAME='Fedora Linux' 00:02:20.202 ++ VERSION='39 (Cloud Edition)' 00:02:20.202 ++ ID=fedora 00:02:20.202 ++ VERSION_ID=39 00:02:20.202 ++ VERSION_CODENAME= 00:02:20.202 ++ PLATFORM_ID=platform:f39 00:02:20.202 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:20.202 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:20.202 ++ LOGO=fedora-logo-icon 00:02:20.202 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:20.202 ++ HOME_URL=https://fedoraproject.org/ 00:02:20.202 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:20.202 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:20.202 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:20.202 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:20.202 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:20.202 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:20.202 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:20.202 ++ SUPPORT_END=2024-11-12 00:02:20.202 ++ VARIANT='Cloud Edition' 00:02:20.202 ++ VARIANT_ID=cloud 00:02:20.202 + uname -a 00:02:20.202 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:21.057 Hugepages 00:02:21.057 node hugesize free / total 00:02:21.057 node0 1048576kB 0 / 0 00:02:21.057 node0 2048kB 0 / 0 00:02:21.057 00:02:21.057 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:21.057 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:21.057 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:21.057 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:21.057 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:21.057 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:21.057 + rm -f /tmp/spdk-ld-path 00:02:21.057 + source autorun-spdk.conf 00:02:21.057 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.057 ++ SPDK_TEST_NVME=1 00:02:21.057 ++ SPDK_TEST_FTL=1 00:02:21.057 ++ SPDK_TEST_ISAL=1 00:02:21.057 ++ SPDK_RUN_ASAN=1 00:02:21.057 ++ SPDK_RUN_UBSAN=1 00:02:21.057 ++ SPDK_TEST_XNVME=1 00:02:21.057 ++ SPDK_TEST_NVME_FDP=1 00:02:21.057 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.057 ++ RUN_NIGHTLY=1 00:02:21.057 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:21.057 + [[ -n '' ]] 00:02:21.057 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.057 + for M in /var/spdk/build-*-manifest.txt 00:02:21.057 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:21.057 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.057 + for M in /var/spdk/build-*-manifest.txt 00:02:21.057 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.057 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.318 + for M in /var/spdk/build-*-manifest.txt 00:02:21.318 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.318 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.318 ++ uname 00:02:21.318 + [[ Linux == \L\i\n\u\x ]] 00:02:21.318 + sudo dmesg -T 00:02:21.318 + sudo dmesg --clear 00:02:21.318 + dmesg_pid=5039 00:02:21.318 
+ sudo dmesg -Tw 00:02:21.318 + [[ Fedora Linux == FreeBSD ]] 00:02:21.318 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.318 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.318 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.318 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.318 + export FIO_BIN=/usr/src/fio-static/fio 00:02:21.318 + FIO_BIN=/usr/src/fio-static/fio 00:02:21.318 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.318 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:21.318 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.318 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.318 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.318 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.318 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.318 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.318 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.318 Test configuration: 00:02:21.318 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.318 SPDK_TEST_NVME=1 00:02:21.318 SPDK_TEST_FTL=1 00:02:21.318 SPDK_TEST_ISAL=1 00:02:21.318 SPDK_RUN_ASAN=1 00:02:21.318 SPDK_RUN_UBSAN=1 00:02:21.318 SPDK_TEST_XNVME=1 00:02:21.318 SPDK_TEST_NVME_FDP=1 00:02:21.318 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.318 RUN_NIGHTLY=1 11:17:06 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:02:21.318 11:17:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.318 11:17:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:21.318 11:17:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.318 11:17:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.318 11:17:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.318 11:17:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.318 11:17:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.318 11:17:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.318 11:17:06 -- paths/export.sh@5 -- $ export PATH 00:02:21.318 11:17:06 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.318 11:17:06 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.318 11:17:06 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:21.318 11:17:06 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730027826.XXXXXX 00:02:21.318 11:17:06 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730027826.nmB6ao 00:02:21.318 11:17:06 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:21.318 11:17:06 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:21.318 11:17:06 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:21.318 11:17:06 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.318 11:17:06 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.318 11:17:06 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:21.318 11:17:06 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:21.318 11:17:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.318 11:17:06 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:21.318 11:17:06 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:21.318 11:17:06 -- pm/common@17 -- $ local monitor 00:02:21.318 11:17:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.318 11:17:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:21.318 11:17:06 -- pm/common@25 -- $ sleep 1 00:02:21.318 11:17:06 -- pm/common@21 -- $ date +%s 00:02:21.318 11:17:06 -- pm/common@21 -- $ date +%s 00:02:21.318 11:17:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730027826 00:02:21.318 11:17:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730027826 00:02:21.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730027826_collect-cpu-load.pm.log 00:02:21.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730027826_collect-vmstat.pm.log 00:02:22.259 11:17:07 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:22.259 11:17:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:22.259 11:17:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:22.259 11:17:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:22.259 11:17:07 -- spdk/autobuild.sh@16 -- $ date -u 00:02:22.259 Sun Oct 27 11:17:07 AM UTC 2024 00:02:22.259 11:17:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:22.259 v25.01-pre-118-g169c3cd04 
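The two collect-cpu-load / collect-vmstat invocations above are background resource monitors: they sample the host for the duration of the autobuild and append their readings under the power output directory (the two "Redirecting to ...pm.log" lines are their start-up messages), and they are cleaned up by the stop_monitor_resources trap registered just above. A minimal stand-in for that sampling pattern, assuming nothing about the real scripts in spdk/scripts/perf/pm/ beyond what this log shows:

  #!/usr/bin/env bash
  # Hypothetical sketch only -- not the SPDK collect-cpu-load script.
  # Append a load-average sample once per second until the process is killed.
  outdir=${1:-/tmp/power}                 # the real scripts take -d <dir> -l -p <prefix>
  prefix=${2:-monitor.autobuild.sh}
  mkdir -p "$outdir"
  logfile="$outdir/${prefix}.$(date +%s)_collect-cpu-load.pm.log"
  while true; do
      printf '%s %s\n' "$(date +%T)" "$(cut -d' ' -f1-3 /proc/loadavg)" >>"$logfile"
      sleep 1
  done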
00:02:22.259 11:17:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:22.259 11:17:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:22.259 11:17:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:22.259 11:17:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:22.259 11:17:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.259 ************************************ 00:02:22.259 START TEST asan 00:02:22.259 ************************************ 00:02:22.259 using asan 00:02:22.259 11:17:07 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:22.259 00:02:22.259 real 0m0.000s 00:02:22.259 user 0m0.000s 00:02:22.259 sys 0m0.000s 00:02:22.259 11:17:07 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:22.259 11:17:07 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:22.259 ************************************ 00:02:22.259 END TEST asan 00:02:22.259 ************************************ 00:02:22.520 11:17:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:22.520 11:17:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:22.520 11:17:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:22.520 11:17:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:22.520 11:17:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.520 ************************************ 00:02:22.520 START TEST ubsan 00:02:22.520 ************************************ 00:02:22.520 using ubsan 00:02:22.520 11:17:07 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:22.520 00:02:22.520 real 0m0.000s 00:02:22.520 user 0m0.000s 00:02:22.520 sys 0m0.000s 00:02:22.520 11:17:07 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:22.520 11:17:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:22.520 ************************************ 00:02:22.520 END TEST ubsan 00:02:22.520 ************************************ 00:02:22.520 11:17:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:22.520 11:17:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:22.520 11:17:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:22.520 11:17:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:22.520 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:22.520 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.780 Using 'verbs' RDMA provider 00:02:33.697 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:43.689 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:43.689 Creating mk/config.mk...done. 00:02:43.689 Creating mk/cc.flags.mk...done. 00:02:43.689 Type 'make' to build. 
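The asan and ubsan checks above, and the make step that follows, are all launched through the run_test helper, which brackets an arbitrary command with START/END banners and a timing summary (the real/user/sys lines). The helper itself lives in autotest_common.sh (visible in the xtrace prefixes above) and is not reproduced in this log; a rough re-creation of the observable pattern, for illustration only:

  # Hypothetical sketch of the run_test pattern -- not the SPDK implementation.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"        # produces the real/user/sys lines seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test asan echo 'using asan'   # as invoked by autobuild.sh@20 above
  run_test make make -j10           # as invoked by autobuild.sh@70 below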
00:02:43.689 11:17:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:43.689 11:17:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:43.689 11:17:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:43.689 11:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.689 ************************************ 00:02:43.689 START TEST make 00:02:43.689 ************************************ 00:02:43.689 11:17:28 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:43.689 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:43.689 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:43.689 meson setup builddir \ 00:02:43.689 -Dwith-libaio=enabled \ 00:02:43.689 -Dwith-liburing=enabled \ 00:02:43.689 -Dwith-libvfn=disabled \ 00:02:43.689 -Dwith-spdk=disabled \ 00:02:43.689 -Dexamples=false \ 00:02:43.689 -Dtests=false \ 00:02:43.689 -Dtools=false && \ 00:02:43.689 meson compile -C builddir && \ 00:02:43.689 cd -) 00:02:43.689 make[1]: Nothing to be done for 'all'. 00:02:45.589 The Meson build system 00:02:45.589 Version: 1.5.0 00:02:45.589 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:45.589 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:45.589 Build type: native build 00:02:45.589 Project name: xnvme 00:02:45.589 Project version: 0.7.5 00:02:45.589 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:45.589 C linker for the host machine: cc ld.bfd 2.40-14 00:02:45.589 Host machine cpu family: x86_64 00:02:45.589 Host machine cpu: x86_64 00:02:45.589 Message: host_machine.system: linux 00:02:45.589 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:45.589 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:45.589 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:45.589 Run-time dependency threads found: YES 00:02:45.589 Has header "setupapi.h" : NO 00:02:45.589 Has header "linux/blkzoned.h" : YES 00:02:45.589 Has header "linux/blkzoned.h" : YES (cached) 00:02:45.589 Has header "libaio.h" : YES 00:02:45.589 Library aio found: YES 00:02:45.589 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:45.589 Run-time dependency liburing found: YES 2.2 00:02:45.589 Dependency libvfn skipped: feature with-libvfn disabled 00:02:45.589 Found CMake: /usr/bin/cmake (3.27.7) 00:02:45.589 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:45.589 Subproject spdk : skipped: feature with-spdk disabled 00:02:45.589 Run-time dependency appleframeworks found: NO (tried framework) 00:02:45.589 Run-time dependency appleframeworks found: NO (tried framework) 00:02:45.589 Library rt found: YES 00:02:45.589 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:45.589 Configuring xnvme_config.h using configuration 00:02:45.589 Configuring xnvme.spec using configuration 00:02:45.589 Run-time dependency bash-completion found: YES 2.11 00:02:45.589 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:45.589 Program cp found: YES (/usr/bin/cp) 00:02:45.589 Build targets in project: 3 00:02:45.589 00:02:45.589 xnvme 0.7.5 00:02:45.589 00:02:45.589 Subprojects 00:02:45.589 spdk : NO Feature 'with-spdk' disabled 00:02:45.589 00:02:45.589 User defined options 00:02:45.589 examples : false 00:02:45.589 tests : false 00:02:45.589 tools : false 00:02:45.589 with-libaio : enabled 00:02:45.589 with-liburing: enabled 00:02:45.589 with-libvfn : disabled 00:02:45.589 with-spdk : disabled 
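Everything from "The Meson build system" down to the "User defined options" list above is Meson's summary of the -D flags passed in the meson setup command echoed at the top of this make step (libaio and liburing enabled; libvfn, spdk, examples, tests and tools disabled). To inspect or adjust the resolved configuration of that builddir afterwards, the standard Meson commands would be, purely as an illustration and not part of this run:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir                    # list the current option values
  meson configure builddir -Dexamples=true    # change a single option in place
  meson setup --reconfigure builddir          # re-run setup with the stored options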
00:02:45.589 00:02:45.589 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.848 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:45.848 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:46.106 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:46.106 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:46.106 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:46.106 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:46.106 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:46.106 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:46.106 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:46.106 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:46.106 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:46.106 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:46.106 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:46.106 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:46.106 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:46.106 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:46.106 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:46.106 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:46.106 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:46.106 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:46.106 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:46.106 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:46.106 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:46.106 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:46.106 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:46.106 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:46.106 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:46.365 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:46.365 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:46.365 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:46.365 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:46.365 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:46.365 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:46.365 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:46.365 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:46.365 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:46.365 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:46.365 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:46.365 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:46.365 [39/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:46.365 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:46.365 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:46.365 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:46.365 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:46.365 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:46.365 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:46.365 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:46.365 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:46.365 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:46.365 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:46.365 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:46.365 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:46.365 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:46.365 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:46.365 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:46.365 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:46.365 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:46.365 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:46.365 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:46.365 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:46.365 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:46.623 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:46.623 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:46.623 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:46.623 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:46.623 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:46.623 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:46.623 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:46.623 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:46.623 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:46.623 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:46.623 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:46.623 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:46.623 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:46.882 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:47.140 [75/76] Linking static target lib/libxnvme.a 00:02:47.140 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:47.140 INFO: autodetecting backend as ninja 00:02:47.140 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:47.140 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:52.411 The Meson build system 00:02:52.411 Version: 1.5.0 00:02:52.411 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:52.411 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.411 Build type: native build 00:02:52.411 
Program cat found: YES (/usr/bin/cat) 00:02:52.411 Project name: DPDK 00:02:52.411 Project version: 24.03.0 00:02:52.411 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.411 C linker for the host machine: cc ld.bfd 2.40-14 00:02:52.411 Host machine cpu family: x86_64 00:02:52.411 Host machine cpu: x86_64 00:02:52.411 Message: ## Building in Developer Mode ## 00:02:52.411 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.411 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:52.411 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.411 Program python3 found: YES (/usr/bin/python3) 00:02:52.411 Program cat found: YES (/usr/bin/cat) 00:02:52.411 Compiler for C supports arguments -march=native: YES 00:02:52.412 Checking for size of "void *" : 8 00:02:52.412 Checking for size of "void *" : 8 (cached) 00:02:52.412 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:52.412 Library m found: YES 00:02:52.412 Library numa found: YES 00:02:52.412 Has header "numaif.h" : YES 00:02:52.412 Library fdt found: NO 00:02:52.412 Library execinfo found: NO 00:02:52.412 Has header "execinfo.h" : YES 00:02:52.412 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.412 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.412 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.412 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.412 Run-time dependency openssl found: YES 3.1.1 00:02:52.412 Run-time dependency libpcap found: YES 1.10.4 00:02:52.412 Has header "pcap.h" with dependency libpcap: YES 00:02:52.412 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.412 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.412 Compiler for C supports arguments -Wformat: YES 00:02:52.412 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.412 Compiler for C supports arguments -Wformat-security: NO 00:02:52.412 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.412 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.412 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.412 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.412 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.412 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.412 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.412 Compiler for C supports arguments -Wundef: YES 00:02:52.412 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.412 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.412 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.412 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.412 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.412 Program objdump found: YES (/usr/bin/objdump) 00:02:52.412 Compiler for C supports arguments -mavx512f: YES 00:02:52.412 Checking if "AVX512 checking" compiles: YES 00:02:52.412 Fetching value of define "__SSE4_2__" : 1 00:02:52.412 Fetching value of define "__AES__" : 1 00:02:52.412 Fetching value of define "__AVX__" : 1 00:02:52.412 Fetching value of define "__AVX2__" : 1 00:02:52.412 Fetching value of define "__AVX512BW__" : 1 00:02:52.412 Fetching value of define "__AVX512CD__" : 1 
00:02:52.412 Fetching value of define "__AVX512DQ__" : 1 00:02:52.412 Fetching value of define "__AVX512F__" : 1 00:02:52.412 Fetching value of define "__AVX512VL__" : 1 00:02:52.412 Fetching value of define "__PCLMUL__" : 1 00:02:52.412 Fetching value of define "__RDRND__" : 1 00:02:52.412 Fetching value of define "__RDSEED__" : 1 00:02:52.412 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:52.412 Fetching value of define "__znver1__" : (undefined) 00:02:52.412 Fetching value of define "__znver2__" : (undefined) 00:02:52.412 Fetching value of define "__znver3__" : (undefined) 00:02:52.412 Fetching value of define "__znver4__" : (undefined) 00:02:52.412 Library asan found: YES 00:02:52.412 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.412 Message: lib/log: Defining dependency "log" 00:02:52.412 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.412 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.412 Library rt found: YES 00:02:52.412 Checking for function "getentropy" : NO 00:02:52.412 Message: lib/eal: Defining dependency "eal" 00:02:52.412 Message: lib/ring: Defining dependency "ring" 00:02:52.412 Message: lib/rcu: Defining dependency "rcu" 00:02:52.412 Message: lib/mempool: Defining dependency "mempool" 00:02:52.412 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.412 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.412 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:52.412 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:52.412 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:52.412 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:52.412 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:52.412 Compiler for C supports arguments -mpclmul: YES 00:02:52.412 Compiler for C supports arguments -maes: YES 00:02:52.412 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.412 Compiler for C supports arguments -mavx512bw: YES 00:02:52.412 Compiler for C supports arguments -mavx512dq: YES 00:02:52.412 Compiler for C supports arguments -mavx512vl: YES 00:02:52.412 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.412 Compiler for C supports arguments -mavx2: YES 00:02:52.412 Compiler for C supports arguments -mavx: YES 00:02:52.412 Message: lib/net: Defining dependency "net" 00:02:52.412 Message: lib/meter: Defining dependency "meter" 00:02:52.412 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.412 Message: lib/pci: Defining dependency "pci" 00:02:52.412 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.412 Message: lib/hash: Defining dependency "hash" 00:02:52.412 Message: lib/timer: Defining dependency "timer" 00:02:52.412 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.412 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.412 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.412 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.412 Message: lib/power: Defining dependency "power" 00:02:52.412 Message: lib/reorder: Defining dependency "reorder" 00:02:52.412 Message: lib/security: Defining dependency "security" 00:02:52.412 Has header "linux/userfaultfd.h" : YES 00:02:52.412 Has header "linux/vduse.h" : YES 00:02:52.412 Message: lib/vhost: Defining dependency "vhost" 00:02:52.412 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:52.412 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.412 Message: drivers/bus/vdev: Defining dependency 
"bus_vdev" 00:02:52.412 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.412 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:52.412 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:52.412 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:52.412 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:52.412 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:52.412 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:52.412 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:52.412 Configuring doxy-api-html.conf using configuration 00:02:52.412 Configuring doxy-api-man.conf using configuration 00:02:52.412 Program mandb found: YES (/usr/bin/mandb) 00:02:52.412 Program sphinx-build found: NO 00:02:52.412 Configuring rte_build_config.h using configuration 00:02:52.412 Message: 00:02:52.412 ================= 00:02:52.412 Applications Enabled 00:02:52.412 ================= 00:02:52.412 00:02:52.412 apps: 00:02:52.412 00:02:52.412 00:02:52.412 Message: 00:02:52.412 ================= 00:02:52.412 Libraries Enabled 00:02:52.412 ================= 00:02:52.412 00:02:52.412 libs: 00:02:52.412 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:52.412 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:52.412 cryptodev, dmadev, power, reorder, security, vhost, 00:02:52.412 00:02:52.412 Message: 00:02:52.412 =============== 00:02:52.412 Drivers Enabled 00:02:52.412 =============== 00:02:52.412 00:02:52.412 common: 00:02:52.412 00:02:52.412 bus: 00:02:52.412 pci, vdev, 00:02:52.412 mempool: 00:02:52.412 ring, 00:02:52.412 dma: 00:02:52.412 00:02:52.412 net: 00:02:52.412 00:02:52.412 crypto: 00:02:52.412 00:02:52.412 compress: 00:02:52.412 00:02:52.412 vdpa: 00:02:52.412 00:02:52.412 00:02:52.412 Message: 00:02:52.412 ================= 00:02:52.412 Content Skipped 00:02:52.412 ================= 00:02:52.412 00:02:52.412 apps: 00:02:52.412 dumpcap: explicitly disabled via build config 00:02:52.412 graph: explicitly disabled via build config 00:02:52.412 pdump: explicitly disabled via build config 00:02:52.412 proc-info: explicitly disabled via build config 00:02:52.412 test-acl: explicitly disabled via build config 00:02:52.412 test-bbdev: explicitly disabled via build config 00:02:52.412 test-cmdline: explicitly disabled via build config 00:02:52.412 test-compress-perf: explicitly disabled via build config 00:02:52.412 test-crypto-perf: explicitly disabled via build config 00:02:52.412 test-dma-perf: explicitly disabled via build config 00:02:52.412 test-eventdev: explicitly disabled via build config 00:02:52.412 test-fib: explicitly disabled via build config 00:02:52.412 test-flow-perf: explicitly disabled via build config 00:02:52.412 test-gpudev: explicitly disabled via build config 00:02:52.412 test-mldev: explicitly disabled via build config 00:02:52.412 test-pipeline: explicitly disabled via build config 00:02:52.412 test-pmd: explicitly disabled via build config 00:02:52.412 test-regex: explicitly disabled via build config 00:02:52.412 test-sad: explicitly disabled via build config 00:02:52.412 test-security-perf: explicitly disabled via build config 00:02:52.412 00:02:52.412 libs: 00:02:52.412 argparse: explicitly disabled via build config 00:02:52.412 metrics: explicitly disabled via build config 00:02:52.412 acl: explicitly disabled via build config 00:02:52.412 bbdev: explicitly 
disabled via build config 00:02:52.412 bitratestats: explicitly disabled via build config 00:02:52.412 bpf: explicitly disabled via build config 00:02:52.412 cfgfile: explicitly disabled via build config 00:02:52.412 distributor: explicitly disabled via build config 00:02:52.412 efd: explicitly disabled via build config 00:02:52.412 eventdev: explicitly disabled via build config 00:02:52.412 dispatcher: explicitly disabled via build config 00:02:52.412 gpudev: explicitly disabled via build config 00:02:52.412 gro: explicitly disabled via build config 00:02:52.412 gso: explicitly disabled via build config 00:02:52.412 ip_frag: explicitly disabled via build config 00:02:52.412 jobstats: explicitly disabled via build config 00:02:52.412 latencystats: explicitly disabled via build config 00:02:52.412 lpm: explicitly disabled via build config 00:02:52.412 member: explicitly disabled via build config 00:02:52.412 pcapng: explicitly disabled via build config 00:02:52.412 rawdev: explicitly disabled via build config 00:02:52.412 regexdev: explicitly disabled via build config 00:02:52.412 mldev: explicitly disabled via build config 00:02:52.412 rib: explicitly disabled via build config 00:02:52.412 sched: explicitly disabled via build config 00:02:52.413 stack: explicitly disabled via build config 00:02:52.413 ipsec: explicitly disabled via build config 00:02:52.413 pdcp: explicitly disabled via build config 00:02:52.413 fib: explicitly disabled via build config 00:02:52.413 port: explicitly disabled via build config 00:02:52.413 pdump: explicitly disabled via build config 00:02:52.413 table: explicitly disabled via build config 00:02:52.413 pipeline: explicitly disabled via build config 00:02:52.413 graph: explicitly disabled via build config 00:02:52.413 node: explicitly disabled via build config 00:02:52.413 00:02:52.413 drivers: 00:02:52.413 common/cpt: not in enabled drivers build config 00:02:52.413 common/dpaax: not in enabled drivers build config 00:02:52.413 common/iavf: not in enabled drivers build config 00:02:52.413 common/idpf: not in enabled drivers build config 00:02:52.413 common/ionic: not in enabled drivers build config 00:02:52.413 common/mvep: not in enabled drivers build config 00:02:52.413 common/octeontx: not in enabled drivers build config 00:02:52.413 bus/auxiliary: not in enabled drivers build config 00:02:52.413 bus/cdx: not in enabled drivers build config 00:02:52.413 bus/dpaa: not in enabled drivers build config 00:02:52.413 bus/fslmc: not in enabled drivers build config 00:02:52.413 bus/ifpga: not in enabled drivers build config 00:02:52.413 bus/platform: not in enabled drivers build config 00:02:52.413 bus/uacce: not in enabled drivers build config 00:02:52.413 bus/vmbus: not in enabled drivers build config 00:02:52.413 common/cnxk: not in enabled drivers build config 00:02:52.413 common/mlx5: not in enabled drivers build config 00:02:52.413 common/nfp: not in enabled drivers build config 00:02:52.413 common/nitrox: not in enabled drivers build config 00:02:52.413 common/qat: not in enabled drivers build config 00:02:52.413 common/sfc_efx: not in enabled drivers build config 00:02:52.413 mempool/bucket: not in enabled drivers build config 00:02:52.413 mempool/cnxk: not in enabled drivers build config 00:02:52.413 mempool/dpaa: not in enabled drivers build config 00:02:52.413 mempool/dpaa2: not in enabled drivers build config 00:02:52.413 mempool/octeontx: not in enabled drivers build config 00:02:52.413 mempool/stack: not in enabled drivers build config 00:02:52.413 
dma/cnxk: not in enabled drivers build config 00:02:52.413 dma/dpaa: not in enabled drivers build config 00:02:52.413 dma/dpaa2: not in enabled drivers build config 00:02:52.413 dma/hisilicon: not in enabled drivers build config 00:02:52.413 dma/idxd: not in enabled drivers build config 00:02:52.413 dma/ioat: not in enabled drivers build config 00:02:52.413 dma/skeleton: not in enabled drivers build config 00:02:52.413 net/af_packet: not in enabled drivers build config 00:02:52.413 net/af_xdp: not in enabled drivers build config 00:02:52.413 net/ark: not in enabled drivers build config 00:02:52.413 net/atlantic: not in enabled drivers build config 00:02:52.413 net/avp: not in enabled drivers build config 00:02:52.413 net/axgbe: not in enabled drivers build config 00:02:52.413 net/bnx2x: not in enabled drivers build config 00:02:52.413 net/bnxt: not in enabled drivers build config 00:02:52.413 net/bonding: not in enabled drivers build config 00:02:52.413 net/cnxk: not in enabled drivers build config 00:02:52.413 net/cpfl: not in enabled drivers build config 00:02:52.413 net/cxgbe: not in enabled drivers build config 00:02:52.413 net/dpaa: not in enabled drivers build config 00:02:52.413 net/dpaa2: not in enabled drivers build config 00:02:52.413 net/e1000: not in enabled drivers build config 00:02:52.413 net/ena: not in enabled drivers build config 00:02:52.413 net/enetc: not in enabled drivers build config 00:02:52.413 net/enetfec: not in enabled drivers build config 00:02:52.413 net/enic: not in enabled drivers build config 00:02:52.413 net/failsafe: not in enabled drivers build config 00:02:52.413 net/fm10k: not in enabled drivers build config 00:02:52.413 net/gve: not in enabled drivers build config 00:02:52.413 net/hinic: not in enabled drivers build config 00:02:52.413 net/hns3: not in enabled drivers build config 00:02:52.413 net/i40e: not in enabled drivers build config 00:02:52.413 net/iavf: not in enabled drivers build config 00:02:52.413 net/ice: not in enabled drivers build config 00:02:52.413 net/idpf: not in enabled drivers build config 00:02:52.413 net/igc: not in enabled drivers build config 00:02:52.413 net/ionic: not in enabled drivers build config 00:02:52.413 net/ipn3ke: not in enabled drivers build config 00:02:52.413 net/ixgbe: not in enabled drivers build config 00:02:52.413 net/mana: not in enabled drivers build config 00:02:52.413 net/memif: not in enabled drivers build config 00:02:52.413 net/mlx4: not in enabled drivers build config 00:02:52.413 net/mlx5: not in enabled drivers build config 00:02:52.413 net/mvneta: not in enabled drivers build config 00:02:52.413 net/mvpp2: not in enabled drivers build config 00:02:52.413 net/netvsc: not in enabled drivers build config 00:02:52.413 net/nfb: not in enabled drivers build config 00:02:52.413 net/nfp: not in enabled drivers build config 00:02:52.413 net/ngbe: not in enabled drivers build config 00:02:52.413 net/null: not in enabled drivers build config 00:02:52.413 net/octeontx: not in enabled drivers build config 00:02:52.413 net/octeon_ep: not in enabled drivers build config 00:02:52.413 net/pcap: not in enabled drivers build config 00:02:52.413 net/pfe: not in enabled drivers build config 00:02:52.413 net/qede: not in enabled drivers build config 00:02:52.413 net/ring: not in enabled drivers build config 00:02:52.413 net/sfc: not in enabled drivers build config 00:02:52.413 net/softnic: not in enabled drivers build config 00:02:52.413 net/tap: not in enabled drivers build config 00:02:52.413 net/thunderx: not in 
enabled drivers build config 00:02:52.413 net/txgbe: not in enabled drivers build config 00:02:52.413 net/vdev_netvsc: not in enabled drivers build config 00:02:52.413 net/vhost: not in enabled drivers build config 00:02:52.413 net/virtio: not in enabled drivers build config 00:02:52.413 net/vmxnet3: not in enabled drivers build config 00:02:52.413 raw/*: missing internal dependency, "rawdev" 00:02:52.413 crypto/armv8: not in enabled drivers build config 00:02:52.413 crypto/bcmfs: not in enabled drivers build config 00:02:52.413 crypto/caam_jr: not in enabled drivers build config 00:02:52.413 crypto/ccp: not in enabled drivers build config 00:02:52.413 crypto/cnxk: not in enabled drivers build config 00:02:52.413 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.413 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.413 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.413 crypto/mlx5: not in enabled drivers build config 00:02:52.413 crypto/mvsam: not in enabled drivers build config 00:02:52.413 crypto/nitrox: not in enabled drivers build config 00:02:52.413 crypto/null: not in enabled drivers build config 00:02:52.413 crypto/octeontx: not in enabled drivers build config 00:02:52.413 crypto/openssl: not in enabled drivers build config 00:02:52.413 crypto/scheduler: not in enabled drivers build config 00:02:52.413 crypto/uadk: not in enabled drivers build config 00:02:52.413 crypto/virtio: not in enabled drivers build config 00:02:52.413 compress/isal: not in enabled drivers build config 00:02:52.413 compress/mlx5: not in enabled drivers build config 00:02:52.413 compress/nitrox: not in enabled drivers build config 00:02:52.413 compress/octeontx: not in enabled drivers build config 00:02:52.413 compress/zlib: not in enabled drivers build config 00:02:52.413 regex/*: missing internal dependency, "regexdev" 00:02:52.413 ml/*: missing internal dependency, "mldev" 00:02:52.413 vdpa/ifc: not in enabled drivers build config 00:02:52.413 vdpa/mlx5: not in enabled drivers build config 00:02:52.413 vdpa/nfp: not in enabled drivers build config 00:02:52.413 vdpa/sfc: not in enabled drivers build config 00:02:52.413 event/*: missing internal dependency, "eventdev" 00:02:52.413 baseband/*: missing internal dependency, "bbdev" 00:02:52.413 gpu/*: missing internal dependency, "gpudev" 00:02:52.413 00:02:52.413 00:02:52.672 Build targets in project: 84 00:02:52.672 00:02:52.672 DPDK 24.03.0 00:02:52.672 00:02:52.672 User defined options 00:02:52.672 buildtype : debug 00:02:52.672 default_library : shared 00:02:52.672 libdir : lib 00:02:52.672 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.672 b_sanitize : address 00:02:52.672 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:52.672 c_link_args : 00:02:52.672 cpu_instruction_set: native 00:02:52.672 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.672 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.672 enable_docs : false 00:02:52.672 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:52.672 enable_kmods : false 00:02:52.672 
max_lcores : 128 00:02:52.672 tests : false 00:02:52.672 00:02:52.672 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.932 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:52.932 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:52.932 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.932 [3/267] Linking static target lib/librte_kvargs.a 00:02:52.932 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:52.932 [5/267] Linking static target lib/librte_log.a 00:02:52.932 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.191 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.191 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.191 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.191 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.449 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.449 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.449 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.449 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.449 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.449 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.449 [17/267] Linking static target lib/librte_telemetry.a 00:02:53.449 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.708 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.708 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.708 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.708 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.708 [23/267] Linking target lib/librte_log.so.24.1 00:02:53.708 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.708 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.969 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.969 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.969 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.969 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.969 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.969 [31/267] Linking target lib/librte_kvargs.so.24.1 00:02:53.969 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.969 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.232 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.232 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.232 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.232 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.232 [38/267] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.232 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:54.232 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.232 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.232 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.232 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.493 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.493 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.493 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.493 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.493 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.493 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.493 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.493 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.754 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.754 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.754 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.754 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.754 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.754 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.754 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.754 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.012 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.012 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.012 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.012 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.012 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.012 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:55.012 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.269 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.269 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.269 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.269 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.269 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.527 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.527 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.527 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.527 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.527 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.527 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.527 [78/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.527 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.527 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.783 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.783 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.783 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.783 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.783 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.783 [86/267] Linking static target lib/librte_ring.a 00:02:56.040 [87/267] Linking static target lib/librte_eal.a 00:02:56.040 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.040 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.040 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.040 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.040 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.040 [93/267] Linking static target lib/librte_mempool.a 00:02:56.040 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.040 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.040 [96/267] Linking static target lib/librte_rcu.a 00:02:56.297 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.297 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.297 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.555 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.555 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:56.555 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:56.555 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.555 [104/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:56.812 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:56.812 [106/267] Linking static target lib/librte_meter.a 00:02:56.812 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.069 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.069 [109/267] Linking static target lib/librte_mbuf.a 00:02:57.069 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.069 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.069 [112/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.069 [113/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:57.069 [114/267] Linking static target lib/librte_net.a 00:02:57.069 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.327 [116/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.327 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.584 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.584 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.584 
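For reference, the "User defined options" summary printed above maps onto a meson setup invocation roughly like the one below. This is a hedged reconstruction from the summary itself; the actual call is issued by SPDK's DPDK build wrapper and is not echoed in this log, and the full disable_apps/disable_libs values are the comma-separated lists shown in the summary:

  # run from the dpdk/ submodule; build-tmp matches the directory ninja enters above
  meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=lib \
    -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address -Dcpu_instruction_set=native \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=... -Ddisable_libs=...   # full lists as printed in the summary above
  ninja -C build-tmp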
[120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.842 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:57.842 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.842 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:57.842 [124/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.842 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.842 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.842 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.842 [128/267] Linking static target lib/librte_pci.a 00:02:58.100 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.100 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.100 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.100 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.100 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.100 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:58.100 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.100 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.100 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.100 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.100 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.100 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.100 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.388 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.388 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.388 [144/267] Linking static target lib/librte_cmdline.a 00:02:58.388 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:58.388 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:58.646 [147/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.646 [148/267] Linking static target lib/librte_timer.a 00:02:58.646 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.646 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.646 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:58.646 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.646 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.904 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.904 [155/267] Linking static target lib/librte_ethdev.a 00:02:58.904 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.904 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.904 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.904 [159/267] Linking static target lib/librte_hash.a 00:02:59.162 [160/267] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:59.162 [161/267] Linking static target lib/librte_compressdev.a 00:02:59.162 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.162 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.162 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.162 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.162 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.422 [167/267] Linking static target lib/librte_dmadev.a 00:02:59.422 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.422 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.422 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.422 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.422 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.683 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.683 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.683 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:59.683 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.683 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.683 [178/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.941 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:59.941 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.941 [181/267] Linking static target lib/librte_cryptodev.a 00:02:59.941 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.941 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.941 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.941 [185/267] Linking static target lib/librte_power.a 00:03:00.199 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.199 [187/267] Linking static target lib/librte_reorder.a 00:03:00.199 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.457 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.457 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.457 [191/267] Linking static target lib/librte_security.a 00:03:00.457 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.457 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:00.457 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.716 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:00.974 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.974 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:00.974 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.974 [199/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.235 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.235 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.235 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.235 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.235 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.496 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.496 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.496 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.496 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.496 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.758 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.758 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.758 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.758 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.758 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.758 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.758 [216/267] Linking static target drivers/librte_bus_vdev.a 00:03:01.758 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.758 [218/267] Linking static target drivers/librte_bus_pci.a 00:03:01.758 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.758 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.017 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.017 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.017 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.017 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.017 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:02.017 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.275 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:03.652 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.652 [229/267] Linking target lib/librte_eal.so.24.1 00:03:03.652 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.652 [231/267] Linking target lib/librte_ring.so.24.1 00:03:03.652 [232/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.652 [233/267] Linking target lib/librte_pci.so.24.1 00:03:03.652 [234/267] Linking target lib/librte_timer.so.24.1 00:03:03.652 [235/267] Linking target lib/librte_meter.so.24.1 00:03:03.652 [236/267] Linking target lib/librte_dmadev.so.24.1 00:03:03.652 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.652 [238/267] Generating symbol 
file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.652 [239/267] Linking target lib/librte_rcu.so.24.1 00:03:03.652 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.652 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.652 [242/267] Linking target lib/librte_mempool.so.24.1 00:03:03.652 [243/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.652 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.910 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.910 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.910 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:03.910 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:03.910 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:04.168 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:04.168 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:04.168 [252/267] Linking target lib/librte_net.so.24.1 00:03:04.168 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:04.168 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:04.168 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:04.168 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:04.168 [257/267] Linking target lib/librte_security.so.24.1 00:03:04.168 [258/267] Linking target lib/librte_hash.so.24.1 00:03:04.168 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.426 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:04.426 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:04.426 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:04.426 [263/267] Linking target lib/librte_power.so.24.1 00:03:04.992 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.992 [265/267] Linking static target lib/librte_vhost.a 00:03:06.366 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.366 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:06.366 INFO: autodetecting backend as ninja 00:03:06.366 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:21.304 CC lib/ut_mock/mock.o 00:03:21.304 CC lib/log/log.o 00:03:21.304 CC lib/log/log_flags.o 00:03:21.304 CC lib/log/log_deprecated.o 00:03:21.304 CC lib/ut/ut.o 00:03:21.304 LIB libspdk_ut.a 00:03:21.304 LIB libspdk_ut_mock.a 00:03:21.304 LIB libspdk_log.a 00:03:21.304 SO libspdk_ut.so.2.0 00:03:21.304 SO libspdk_log.so.7.1 00:03:21.304 SO libspdk_ut_mock.so.6.0 00:03:21.304 SYMLINK libspdk_ut.so 00:03:21.304 SYMLINK libspdk_ut_mock.so 00:03:21.304 SYMLINK libspdk_log.so 00:03:21.304 CXX lib/trace_parser/trace.o 00:03:21.304 CC lib/util/base64.o 00:03:21.304 CC lib/util/bit_array.o 00:03:21.304 CC lib/dma/dma.o 00:03:21.304 CC lib/util/crc16.o 00:03:21.304 CC lib/util/cpuset.o 00:03:21.304 CC lib/util/crc32.o 00:03:21.304 CC lib/util/crc32c.o 00:03:21.304 CC lib/ioat/ioat.o 00:03:21.304 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.304 CC lib/util/crc32_ieee.o 00:03:21.304 CC lib/util/crc64.o 00:03:21.304 CC lib/util/dif.o 
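At this point the DPDK submodule build has completed (267/267 targets) and the output switches to SPDK's own make-driven build, whose per-file progress follows. Reproducing this stage locally usually looks roughly like the sketch below; the configure flags are an assumption inferred from the b_sanitize=address/debug DPDK options above rather than something echoed in this part of the log:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-asan   # assumed flags; the CI run sets them via its test configuration
  make -j10                                  # emits the CC/LIB/SO/SYMLINK progress lines that follow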
00:03:21.304 LIB libspdk_dma.a 00:03:21.304 SO libspdk_dma.so.5.0 00:03:21.304 CC lib/util/fd.o 00:03:21.304 CC lib/vfio_user/host/vfio_user.o 00:03:21.304 SYMLINK libspdk_dma.so 00:03:21.304 CC lib/util/fd_group.o 00:03:21.304 CC lib/util/file.o 00:03:21.304 CC lib/util/hexlify.o 00:03:21.304 CC lib/util/iov.o 00:03:21.304 CC lib/util/math.o 00:03:21.304 CC lib/util/net.o 00:03:21.304 LIB libspdk_ioat.a 00:03:21.304 SO libspdk_ioat.so.7.0 00:03:21.304 LIB libspdk_vfio_user.a 00:03:21.304 CC lib/util/pipe.o 00:03:21.304 CC lib/util/strerror_tls.o 00:03:21.304 SO libspdk_vfio_user.so.5.0 00:03:21.304 SYMLINK libspdk_ioat.so 00:03:21.304 CC lib/util/string.o 00:03:21.304 SYMLINK libspdk_vfio_user.so 00:03:21.304 CC lib/util/uuid.o 00:03:21.304 CC lib/util/xor.o 00:03:21.304 CC lib/util/zipf.o 00:03:21.304 CC lib/util/md5.o 00:03:21.304 LIB libspdk_util.a 00:03:21.304 SO libspdk_util.so.10.0 00:03:21.304 LIB libspdk_trace_parser.a 00:03:21.304 SO libspdk_trace_parser.so.6.0 00:03:21.304 SYMLINK libspdk_util.so 00:03:21.304 SYMLINK libspdk_trace_parser.so 00:03:21.305 CC lib/idxd/idxd.o 00:03:21.305 CC lib/idxd/idxd_user.o 00:03:21.305 CC lib/vmd/vmd.o 00:03:21.305 CC lib/idxd/idxd_kernel.o 00:03:21.305 CC lib/rdma_utils/rdma_utils.o 00:03:21.305 CC lib/vmd/led.o 00:03:21.305 CC lib/rdma_provider/common.o 00:03:21.305 CC lib/conf/conf.o 00:03:21.305 CC lib/json/json_parse.o 00:03:21.305 CC lib/env_dpdk/env.o 00:03:21.305 CC lib/env_dpdk/memory.o 00:03:21.305 CC lib/env_dpdk/pci.o 00:03:21.305 LIB libspdk_conf.a 00:03:21.305 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.305 SO libspdk_conf.so.6.0 00:03:21.305 CC lib/json/json_util.o 00:03:21.305 SYMLINK libspdk_conf.so 00:03:21.305 CC lib/json/json_write.o 00:03:21.305 CC lib/env_dpdk/init.o 00:03:21.305 LIB libspdk_rdma_utils.a 00:03:21.305 SO libspdk_rdma_utils.so.1.0 00:03:21.305 SYMLINK libspdk_rdma_utils.so 00:03:21.305 CC lib/env_dpdk/threads.o 00:03:21.305 LIB libspdk_rdma_provider.a 00:03:21.305 SO libspdk_rdma_provider.so.6.0 00:03:21.305 CC lib/env_dpdk/pci_ioat.o 00:03:21.305 SYMLINK libspdk_rdma_provider.so 00:03:21.305 CC lib/env_dpdk/pci_virtio.o 00:03:21.305 CC lib/env_dpdk/pci_vmd.o 00:03:21.305 CC lib/env_dpdk/pci_idxd.o 00:03:21.305 LIB libspdk_json.a 00:03:21.305 CC lib/env_dpdk/pci_event.o 00:03:21.305 SO libspdk_json.so.6.0 00:03:21.305 CC lib/env_dpdk/sigbus_handler.o 00:03:21.305 CC lib/env_dpdk/pci_dpdk.o 00:03:21.305 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.305 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.305 SYMLINK libspdk_json.so 00:03:21.305 LIB libspdk_idxd.a 00:03:21.305 SO libspdk_idxd.so.12.1 00:03:21.305 LIB libspdk_vmd.a 00:03:21.305 SYMLINK libspdk_idxd.so 00:03:21.305 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.305 SO libspdk_vmd.so.6.0 00:03:21.305 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.305 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.305 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.305 SYMLINK libspdk_vmd.so 00:03:21.564 LIB libspdk_jsonrpc.a 00:03:21.564 SO libspdk_jsonrpc.so.6.0 00:03:21.564 SYMLINK libspdk_jsonrpc.so 00:03:21.822 CC lib/rpc/rpc.o 00:03:22.083 LIB libspdk_env_dpdk.a 00:03:22.083 SO libspdk_env_dpdk.so.15.1 00:03:22.083 LIB libspdk_rpc.a 00:03:22.083 SO libspdk_rpc.so.6.0 00:03:22.083 SYMLINK libspdk_rpc.so 00:03:22.342 SYMLINK libspdk_env_dpdk.so 00:03:22.342 CC lib/notify/notify.o 00:03:22.342 CC lib/trace/trace.o 00:03:22.342 CC lib/notify/notify_rpc.o 00:03:22.342 CC lib/trace/trace_rpc.o 00:03:22.342 CC lib/trace/trace_flags.o 00:03:22.342 CC lib/keyring/keyring_rpc.o 00:03:22.342 
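In this make output, CC compiles a single object, LIB archives a static library, SO links the versioned shared object, and SYMLINK creates the unversioned .so link; the interleaving reflects the parallel (-j10) build. The resulting libraries normally land under build/lib in the SPDK tree, so a quick post-build sanity check looks roughly like this (paths assumed from the repo location used elsewhere in the log, library name taken from the SO/SYMLINK lines above):

  ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_util.so*
  ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_util.so | head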
CC lib/keyring/keyring.o 00:03:22.602 LIB libspdk_notify.a 00:03:22.602 SO libspdk_notify.so.6.0 00:03:22.602 SYMLINK libspdk_notify.so 00:03:22.602 LIB libspdk_keyring.a 00:03:22.602 LIB libspdk_trace.a 00:03:22.602 SO libspdk_keyring.so.2.0 00:03:22.602 SO libspdk_trace.so.11.0 00:03:22.602 SYMLINK libspdk_keyring.so 00:03:22.602 SYMLINK libspdk_trace.so 00:03:22.864 CC lib/sock/sock.o 00:03:22.864 CC lib/sock/sock_rpc.o 00:03:22.864 CC lib/thread/iobuf.o 00:03:22.864 CC lib/thread/thread.o 00:03:23.429 LIB libspdk_sock.a 00:03:23.429 SO libspdk_sock.so.10.0 00:03:23.429 SYMLINK libspdk_sock.so 00:03:23.687 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:23.687 CC lib/nvme/nvme_ctrlr.o 00:03:23.687 CC lib/nvme/nvme_ns_cmd.o 00:03:23.687 CC lib/nvme/nvme_fabric.o 00:03:23.687 CC lib/nvme/nvme_ns.o 00:03:23.687 CC lib/nvme/nvme_pcie_common.o 00:03:23.687 CC lib/nvme/nvme_qpair.o 00:03:23.687 CC lib/nvme/nvme_pcie.o 00:03:23.687 CC lib/nvme/nvme.o 00:03:24.256 CC lib/nvme/nvme_quirks.o 00:03:24.256 CC lib/nvme/nvme_transport.o 00:03:24.256 CC lib/nvme/nvme_discovery.o 00:03:24.256 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.514 LIB libspdk_thread.a 00:03:24.514 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.514 CC lib/nvme/nvme_tcp.o 00:03:24.514 SO libspdk_thread.so.11.0 00:03:24.514 CC lib/nvme/nvme_opal.o 00:03:24.514 SYMLINK libspdk_thread.so 00:03:24.514 CC lib/nvme/nvme_io_msg.o 00:03:24.514 CC lib/nvme/nvme_poll_group.o 00:03:24.772 CC lib/nvme/nvme_zns.o 00:03:24.772 CC lib/nvme/nvme_stubs.o 00:03:24.772 CC lib/accel/accel.o 00:03:25.030 CC lib/accel/accel_rpc.o 00:03:25.030 CC lib/accel/accel_sw.o 00:03:25.030 CC lib/blob/blobstore.o 00:03:25.030 CC lib/blob/request.o 00:03:25.030 CC lib/blob/zeroes.o 00:03:25.030 CC lib/init/json_config.o 00:03:25.288 CC lib/blob/blob_bs_dev.o 00:03:25.289 CC lib/init/subsystem.o 00:03:25.289 CC lib/init/subsystem_rpc.o 00:03:25.289 CC lib/virtio/virtio.o 00:03:25.289 CC lib/virtio/virtio_vhost_user.o 00:03:25.289 CC lib/nvme/nvme_auth.o 00:03:25.289 CC lib/virtio/virtio_vfio_user.o 00:03:25.289 CC lib/init/rpc.o 00:03:25.548 CC lib/nvme/nvme_cuse.o 00:03:25.548 CC lib/nvme/nvme_rdma.o 00:03:25.548 LIB libspdk_init.a 00:03:25.548 CC lib/virtio/virtio_pci.o 00:03:25.548 SO libspdk_init.so.6.0 00:03:25.548 SYMLINK libspdk_init.so 00:03:25.806 CC lib/event/reactor.o 00:03:25.806 CC lib/event/app.o 00:03:25.806 CC lib/fsdev/fsdev.o 00:03:25.806 LIB libspdk_virtio.a 00:03:25.806 SO libspdk_virtio.so.7.0 00:03:25.806 CC lib/fsdev/fsdev_io.o 00:03:25.806 LIB libspdk_accel.a 00:03:25.806 SYMLINK libspdk_virtio.so 00:03:25.806 CC lib/event/log_rpc.o 00:03:26.064 SO libspdk_accel.so.16.0 00:03:26.064 SYMLINK libspdk_accel.so 00:03:26.064 CC lib/event/app_rpc.o 00:03:26.064 CC lib/event/scheduler_static.o 00:03:26.064 CC lib/fsdev/fsdev_rpc.o 00:03:26.322 CC lib/bdev/bdev.o 00:03:26.322 CC lib/bdev/part.o 00:03:26.322 CC lib/bdev/bdev_rpc.o 00:03:26.322 CC lib/bdev/scsi_nvme.o 00:03:26.322 CC lib/bdev/bdev_zone.o 00:03:26.322 LIB libspdk_event.a 00:03:26.322 SO libspdk_event.so.14.0 00:03:26.322 LIB libspdk_fsdev.a 00:03:26.322 SYMLINK libspdk_event.so 00:03:26.322 SO libspdk_fsdev.so.2.0 00:03:26.322 SYMLINK libspdk_fsdev.so 00:03:26.580 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:26.838 LIB libspdk_nvme.a 00:03:27.098 SO libspdk_nvme.so.14.1 00:03:27.098 SYMLINK libspdk_nvme.so 00:03:27.356 LIB libspdk_fuse_dispatcher.a 00:03:27.356 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.356 SYMLINK libspdk_fuse_dispatcher.so 00:03:28.290 LIB libspdk_blob.a 00:03:28.290 SO 
libspdk_blob.so.11.0 00:03:28.290 LIB libspdk_bdev.a 00:03:28.547 SYMLINK libspdk_blob.so 00:03:28.547 SO libspdk_bdev.so.17.0 00:03:28.547 SYMLINK libspdk_bdev.so 00:03:28.547 CC lib/blobfs/blobfs.o 00:03:28.547 CC lib/blobfs/tree.o 00:03:28.547 CC lib/lvol/lvol.o 00:03:28.547 CC lib/nbd/nbd.o 00:03:28.547 CC lib/nbd/nbd_rpc.o 00:03:28.547 CC lib/scsi/dev.o 00:03:28.547 CC lib/scsi/lun.o 00:03:28.547 CC lib/ftl/ftl_core.o 00:03:28.547 CC lib/ublk/ublk.o 00:03:28.547 CC lib/nvmf/ctrlr.o 00:03:28.805 CC lib/nvmf/ctrlr_discovery.o 00:03:28.806 CC lib/nvmf/ctrlr_bdev.o 00:03:28.806 CC lib/nvmf/subsystem.o 00:03:29.064 CC lib/scsi/port.o 00:03:29.064 LIB libspdk_nbd.a 00:03:29.064 SO libspdk_nbd.so.7.0 00:03:29.064 SYMLINK libspdk_nbd.so 00:03:29.064 CC lib/nvmf/nvmf.o 00:03:29.064 CC lib/ftl/ftl_init.o 00:03:29.064 CC lib/scsi/scsi.o 00:03:29.064 CC lib/nvmf/nvmf_rpc.o 00:03:29.064 CC lib/ublk/ublk_rpc.o 00:03:29.322 CC lib/ftl/ftl_layout.o 00:03:29.322 CC lib/scsi/scsi_bdev.o 00:03:29.322 LIB libspdk_ublk.a 00:03:29.322 SO libspdk_ublk.so.3.0 00:03:29.322 SYMLINK libspdk_ublk.so 00:03:29.322 CC lib/scsi/scsi_pr.o 00:03:29.322 LIB libspdk_lvol.a 00:03:29.323 SO libspdk_lvol.so.10.0 00:03:29.323 LIB libspdk_blobfs.a 00:03:29.323 CC lib/scsi/scsi_rpc.o 00:03:29.323 SYMLINK libspdk_lvol.so 00:03:29.323 SO libspdk_blobfs.so.10.0 00:03:29.323 CC lib/nvmf/transport.o 00:03:29.581 SYMLINK libspdk_blobfs.so 00:03:29.581 CC lib/nvmf/tcp.o 00:03:29.581 CC lib/ftl/ftl_debug.o 00:03:29.581 CC lib/scsi/task.o 00:03:29.581 CC lib/ftl/ftl_io.o 00:03:29.581 CC lib/ftl/ftl_sb.o 00:03:29.581 CC lib/ftl/ftl_l2p.o 00:03:29.581 CC lib/nvmf/stubs.o 00:03:29.839 LIB libspdk_scsi.a 00:03:29.839 CC lib/ftl/ftl_l2p_flat.o 00:03:29.839 CC lib/ftl/ftl_nv_cache.o 00:03:29.839 SO libspdk_scsi.so.9.0 00:03:29.839 CC lib/nvmf/mdns_server.o 00:03:29.839 SYMLINK libspdk_scsi.so 00:03:29.839 CC lib/nvmf/rdma.o 00:03:29.839 CC lib/ftl/ftl_band.o 00:03:29.839 CC lib/ftl/ftl_band_ops.o 00:03:29.839 CC lib/ftl/ftl_writer.o 00:03:30.097 CC lib/ftl/ftl_rq.o 00:03:30.097 CC lib/ftl/ftl_reloc.o 00:03:30.097 CC lib/nvmf/auth.o 00:03:30.098 CC lib/ftl/ftl_l2p_cache.o 00:03:30.098 CC lib/ftl/ftl_p2l.o 00:03:30.356 CC lib/iscsi/conn.o 00:03:30.356 CC lib/ftl/ftl_p2l_log.o 00:03:30.356 CC lib/ftl/mngt/ftl_mngt.o 00:03:30.356 CC lib/iscsi/init_grp.o 00:03:30.614 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:30.614 CC lib/iscsi/iscsi.o 00:03:30.614 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:30.614 CC lib/vhost/vhost.o 00:03:30.614 CC lib/vhost/vhost_rpc.o 00:03:30.614 CC lib/vhost/vhost_scsi.o 00:03:30.614 CC lib/vhost/vhost_blk.o 00:03:30.614 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.873 CC lib/vhost/rte_vhost_user.o 00:03:30.873 CC lib/iscsi/param.o 00:03:30.873 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:30.873 CC lib/iscsi/portal_grp.o 00:03:31.131 CC lib/iscsi/tgt_node.o 00:03:31.131 CC lib/iscsi/iscsi_subsystem.o 00:03:31.131 CC lib/iscsi/iscsi_rpc.o 00:03:31.131 CC lib/iscsi/task.o 00:03:31.131 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.131 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.390 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.390 CC lib/ftl/utils/ftl_conf.o 00:03:31.390 CC lib/ftl/utils/ftl_md.o 00:03:31.390 LIB libspdk_vhost.a 00:03:31.648 CC lib/ftl/utils/ftl_mempool.o 00:03:31.648 CC lib/ftl/utils/ftl_bitmap.o 
00:03:31.648 SO libspdk_vhost.so.8.0 00:03:31.648 CC lib/ftl/utils/ftl_property.o 00:03:31.648 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:31.648 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:31.648 SYMLINK libspdk_vhost.so 00:03:31.648 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:31.648 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:31.648 LIB libspdk_iscsi.a 00:03:31.648 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:31.648 SO libspdk_iscsi.so.8.0 00:03:31.648 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:31.648 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:31.648 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:31.935 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:31.935 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:31.935 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:31.935 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:31.935 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:31.935 SYMLINK libspdk_iscsi.so 00:03:31.935 CC lib/ftl/base/ftl_base_dev.o 00:03:31.935 CC lib/ftl/base/ftl_base_bdev.o 00:03:31.935 LIB libspdk_nvmf.a 00:03:31.935 CC lib/ftl/ftl_trace.o 00:03:31.935 SO libspdk_nvmf.so.20.0 00:03:32.215 LIB libspdk_ftl.a 00:03:32.215 SYMLINK libspdk_nvmf.so 00:03:32.215 SO libspdk_ftl.so.9.0 00:03:32.475 SYMLINK libspdk_ftl.so 00:03:32.736 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.736 CC module/accel/error/accel_error.o 00:03:32.736 CC module/accel/ioat/accel_ioat.o 00:03:32.736 CC module/sock/posix/posix.o 00:03:32.736 CC module/fsdev/aio/fsdev_aio.o 00:03:32.736 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.736 CC module/accel/dsa/accel_dsa.o 00:03:32.736 CC module/blob/bdev/blob_bdev.o 00:03:32.736 CC module/keyring/file/keyring.o 00:03:32.736 CC module/keyring/linux/keyring.o 00:03:32.736 LIB libspdk_env_dpdk_rpc.a 00:03:32.736 SO libspdk_env_dpdk_rpc.so.6.0 00:03:32.736 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.736 CC module/keyring/file/keyring_rpc.o 00:03:32.996 CC module/accel/error/accel_error_rpc.o 00:03:32.997 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.997 CC module/keyring/linux/keyring_rpc.o 00:03:32.997 LIB libspdk_scheduler_dynamic.a 00:03:32.997 SO libspdk_scheduler_dynamic.so.4.0 00:03:32.997 LIB libspdk_keyring_file.a 00:03:32.997 LIB libspdk_accel_error.a 00:03:32.997 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.997 SO libspdk_keyring_file.so.2.0 00:03:32.997 LIB libspdk_accel_ioat.a 00:03:32.997 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.997 SO libspdk_accel_error.so.2.0 00:03:32.997 LIB libspdk_keyring_linux.a 00:03:32.997 SO libspdk_accel_ioat.so.6.0 00:03:32.997 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.997 LIB libspdk_blob_bdev.a 00:03:32.997 SYMLINK libspdk_keyring_file.so 00:03:32.997 SO libspdk_keyring_linux.so.1.0 00:03:32.997 SO libspdk_blob_bdev.so.11.0 00:03:32.997 SYMLINK libspdk_accel_error.so 00:03:32.997 SYMLINK libspdk_accel_ioat.so 00:03:32.997 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:32.997 CC module/fsdev/aio/linux_aio_mgr.o 00:03:32.997 LIB libspdk_accel_dsa.a 00:03:32.997 SYMLINK libspdk_keyring_linux.so 00:03:32.997 SYMLINK libspdk_blob_bdev.so 00:03:32.997 SO libspdk_accel_dsa.so.5.0 00:03:32.997 LIB libspdk_scheduler_dpdk_governor.a 00:03:33.257 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:33.258 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.258 SYMLINK libspdk_accel_dsa.so 00:03:33.258 CC module/accel/iaa/accel_iaa.o 00:03:33.258 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.258 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:33.258 LIB libspdk_scheduler_gscheduler.a 00:03:33.258 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.258 SO 
libspdk_scheduler_gscheduler.so.4.0 00:03:33.258 CC module/bdev/delay/vbdev_delay.o 00:03:33.258 LIB libspdk_accel_iaa.a 00:03:33.258 CC module/bdev/gpt/gpt.o 00:03:33.258 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.258 CC module/bdev/error/vbdev_error.o 00:03:33.258 SO libspdk_accel_iaa.so.3.0 00:03:33.258 SYMLINK libspdk_scheduler_gscheduler.so 00:03:33.258 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.258 SYMLINK libspdk_accel_iaa.so 00:03:33.258 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:33.519 CC module/bdev/malloc/bdev_malloc.o 00:03:33.519 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.519 LIB libspdk_fsdev_aio.a 00:03:33.519 LIB libspdk_sock_posix.a 00:03:33.519 SO libspdk_fsdev_aio.so.1.0 00:03:33.519 SO libspdk_sock_posix.so.6.0 00:03:33.519 LIB libspdk_blobfs_bdev.a 00:03:33.519 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.519 SO libspdk_blobfs_bdev.so.6.0 00:03:33.519 SYMLINK libspdk_fsdev_aio.so 00:03:33.519 SYMLINK libspdk_sock_posix.so 00:03:33.519 CC module/bdev/null/bdev_null.o 00:03:33.519 LIB libspdk_bdev_gpt.a 00:03:33.519 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.519 SYMLINK libspdk_blobfs_bdev.so 00:03:33.519 SO libspdk_bdev_gpt.so.6.0 00:03:33.779 CC module/bdev/nvme/bdev_nvme.o 00:03:33.779 LIB libspdk_bdev_error.a 00:03:33.779 SYMLINK libspdk_bdev_gpt.so 00:03:33.779 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.779 SO libspdk_bdev_error.so.6.0 00:03:33.779 CC module/bdev/passthru/vbdev_passthru.o 00:03:33.779 CC module/bdev/raid/bdev_raid.o 00:03:33.779 CC module/bdev/split/vbdev_split.o 00:03:33.779 LIB libspdk_bdev_delay.a 00:03:33.779 SYMLINK libspdk_bdev_error.so 00:03:33.779 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.779 LIB libspdk_bdev_malloc.a 00:03:33.779 SO libspdk_bdev_delay.so.6.0 00:03:33.779 SO libspdk_bdev_malloc.so.6.0 00:03:33.779 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.779 SYMLINK libspdk_bdev_delay.so 00:03:33.779 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.779 CC module/bdev/null/bdev_null_rpc.o 00:03:33.779 SYMLINK libspdk_bdev_malloc.so 00:03:33.779 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.779 CC module/bdev/raid/raid0.o 00:03:34.039 LIB libspdk_bdev_split.a 00:03:34.039 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.039 LIB libspdk_bdev_null.a 00:03:34.039 SO libspdk_bdev_split.so.6.0 00:03:34.039 SO libspdk_bdev_null.so.6.0 00:03:34.039 SYMLINK libspdk_bdev_split.so 00:03:34.039 SYMLINK libspdk_bdev_null.so 00:03:34.039 CC module/bdev/raid/raid1.o 00:03:34.039 LIB libspdk_bdev_passthru.a 00:03:34.039 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.039 SO libspdk_bdev_passthru.so.6.0 00:03:34.039 CC module/bdev/xnvme/bdev_xnvme.o 00:03:34.039 LIB libspdk_bdev_lvol.a 00:03:34.039 SYMLINK libspdk_bdev_passthru.so 00:03:34.297 CC module/bdev/aio/bdev_aio.o 00:03:34.298 SO libspdk_bdev_lvol.so.6.0 00:03:34.298 CC module/bdev/nvme/nvme_rpc.o 00:03:34.298 SYMLINK libspdk_bdev_lvol.so 00:03:34.298 CC module/bdev/nvme/bdev_mdns_client.o 00:03:34.298 CC module/bdev/ftl/bdev_ftl.o 00:03:34.298 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.298 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.298 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:34.298 CC module/bdev/nvme/vbdev_opal.o 00:03:34.557 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:34.557 CC module/bdev/raid/concat.o 00:03:34.557 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.557 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.557 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.557 LIB libspdk_bdev_ftl.a 00:03:34.557 SO libspdk_bdev_ftl.so.6.0 00:03:34.557 LIB 
libspdk_bdev_xnvme.a 00:03:34.557 LIB libspdk_bdev_zone_block.a 00:03:34.557 SO libspdk_bdev_xnvme.so.3.0 00:03:34.557 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.557 SYMLINK libspdk_bdev_ftl.so 00:03:34.557 SO libspdk_bdev_zone_block.so.6.0 00:03:34.557 LIB libspdk_bdev_aio.a 00:03:34.557 LIB libspdk_bdev_iscsi.a 00:03:34.557 SYMLINK libspdk_bdev_xnvme.so 00:03:34.557 SO libspdk_bdev_aio.so.6.0 00:03:34.557 SO libspdk_bdev_iscsi.so.6.0 00:03:34.557 SYMLINK libspdk_bdev_zone_block.so 00:03:34.818 SYMLINK libspdk_bdev_aio.so 00:03:34.818 SYMLINK libspdk_bdev_iscsi.so 00:03:34.818 LIB libspdk_bdev_raid.a 00:03:34.818 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.818 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.818 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.818 SO libspdk_bdev_raid.so.6.0 00:03:34.818 SYMLINK libspdk_bdev_raid.so 00:03:35.079 LIB libspdk_bdev_virtio.a 00:03:35.340 SO libspdk_bdev_virtio.so.6.0 00:03:35.340 SYMLINK libspdk_bdev_virtio.so 00:03:35.602 LIB libspdk_bdev_nvme.a 00:03:35.863 SO libspdk_bdev_nvme.so.7.0 00:03:35.863 SYMLINK libspdk_bdev_nvme.so 00:03:36.123 CC module/event/subsystems/iobuf/iobuf.o 00:03:36.123 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:36.123 CC module/event/subsystems/vmd/vmd.o 00:03:36.123 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:36.123 CC module/event/subsystems/scheduler/scheduler.o 00:03:36.123 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:36.123 CC module/event/subsystems/keyring/keyring.o 00:03:36.123 CC module/event/subsystems/sock/sock.o 00:03:36.123 CC module/event/subsystems/fsdev/fsdev.o 00:03:36.384 LIB libspdk_event_keyring.a 00:03:36.384 LIB libspdk_event_vmd.a 00:03:36.384 SO libspdk_event_keyring.so.1.0 00:03:36.384 LIB libspdk_event_fsdev.a 00:03:36.384 LIB libspdk_event_vhost_blk.a 00:03:36.384 SO libspdk_event_vmd.so.6.0 00:03:36.384 LIB libspdk_event_sock.a 00:03:36.384 LIB libspdk_event_scheduler.a 00:03:36.384 LIB libspdk_event_iobuf.a 00:03:36.384 SO libspdk_event_sock.so.5.0 00:03:36.384 SO libspdk_event_scheduler.so.4.0 00:03:36.384 SO libspdk_event_vhost_blk.so.3.0 00:03:36.384 SO libspdk_event_fsdev.so.1.0 00:03:36.384 SO libspdk_event_iobuf.so.3.0 00:03:36.384 SYMLINK libspdk_event_keyring.so 00:03:36.384 SYMLINK libspdk_event_vmd.so 00:03:36.384 SYMLINK libspdk_event_sock.so 00:03:36.384 SYMLINK libspdk_event_scheduler.so 00:03:36.384 SYMLINK libspdk_event_vhost_blk.so 00:03:36.384 SYMLINK libspdk_event_fsdev.so 00:03:36.384 SYMLINK libspdk_event_iobuf.so 00:03:36.645 CC module/event/subsystems/accel/accel.o 00:03:36.645 LIB libspdk_event_accel.a 00:03:36.905 SO libspdk_event_accel.so.6.0 00:03:36.905 SYMLINK libspdk_event_accel.so 00:03:37.167 CC module/event/subsystems/bdev/bdev.o 00:03:37.167 LIB libspdk_event_bdev.a 00:03:37.167 SO libspdk_event_bdev.so.6.0 00:03:37.429 SYMLINK libspdk_event_bdev.so 00:03:37.429 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.429 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.429 CC module/event/subsystems/nbd/nbd.o 00:03:37.429 CC module/event/subsystems/scsi/scsi.o 00:03:37.429 CC module/event/subsystems/ublk/ublk.o 00:03:37.690 LIB libspdk_event_ublk.a 00:03:37.690 LIB libspdk_event_scsi.a 00:03:37.690 SO libspdk_event_ublk.so.3.0 00:03:37.690 LIB libspdk_event_nbd.a 00:03:37.690 SO libspdk_event_scsi.so.6.0 00:03:37.690 SO libspdk_event_nbd.so.6.0 00:03:37.690 SYMLINK libspdk_event_ublk.so 00:03:37.690 LIB libspdk_event_nvmf.a 00:03:37.690 SYMLINK libspdk_event_scsi.so 00:03:37.690 SYMLINK libspdk_event_nbd.so 00:03:37.690 SO 
libspdk_event_nvmf.so.6.0 00:03:37.690 SYMLINK libspdk_event_nvmf.so 00:03:37.690 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.950 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.950 LIB libspdk_event_iscsi.a 00:03:37.950 LIB libspdk_event_vhost_scsi.a 00:03:37.950 SO libspdk_event_iscsi.so.6.0 00:03:37.950 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.950 SYMLINK libspdk_event_iscsi.so 00:03:37.950 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.211 SO libspdk.so.6.0 00:03:38.211 SYMLINK libspdk.so 00:03:38.472 TEST_HEADER include/spdk/accel.h 00:03:38.472 TEST_HEADER include/spdk/accel_module.h 00:03:38.472 TEST_HEADER include/spdk/assert.h 00:03:38.472 TEST_HEADER include/spdk/barrier.h 00:03:38.472 TEST_HEADER include/spdk/base64.h 00:03:38.472 TEST_HEADER include/spdk/bdev.h 00:03:38.472 TEST_HEADER include/spdk/bdev_module.h 00:03:38.472 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.472 TEST_HEADER include/spdk/bit_array.h 00:03:38.472 TEST_HEADER include/spdk/bit_pool.h 00:03:38.472 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.472 CC test/rpc_client/rpc_client_test.o 00:03:38.472 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.472 TEST_HEADER include/spdk/blobfs.h 00:03:38.472 CXX app/trace/trace.o 00:03:38.472 TEST_HEADER include/spdk/blob.h 00:03:38.472 TEST_HEADER include/spdk/conf.h 00:03:38.472 TEST_HEADER include/spdk/config.h 00:03:38.472 TEST_HEADER include/spdk/cpuset.h 00:03:38.472 TEST_HEADER include/spdk/crc16.h 00:03:38.472 TEST_HEADER include/spdk/crc32.h 00:03:38.472 TEST_HEADER include/spdk/crc64.h 00:03:38.472 TEST_HEADER include/spdk/dif.h 00:03:38.472 TEST_HEADER include/spdk/dma.h 00:03:38.472 TEST_HEADER include/spdk/endian.h 00:03:38.472 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.472 TEST_HEADER include/spdk/env.h 00:03:38.472 TEST_HEADER include/spdk/event.h 00:03:38.472 TEST_HEADER include/spdk/fd_group.h 00:03:38.472 TEST_HEADER include/spdk/fd.h 00:03:38.472 TEST_HEADER include/spdk/file.h 00:03:38.472 TEST_HEADER include/spdk/fsdev.h 00:03:38.472 TEST_HEADER include/spdk/fsdev_module.h 00:03:38.472 TEST_HEADER include/spdk/ftl.h 00:03:38.472 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:38.472 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.472 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.472 TEST_HEADER include/spdk/hexlify.h 00:03:38.472 TEST_HEADER include/spdk/histogram_data.h 00:03:38.472 TEST_HEADER include/spdk/idxd.h 00:03:38.472 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.472 TEST_HEADER include/spdk/init.h 00:03:38.472 TEST_HEADER include/spdk/ioat.h 00:03:38.472 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.472 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.472 TEST_HEADER include/spdk/json.h 00:03:38.472 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.472 TEST_HEADER include/spdk/keyring.h 00:03:38.472 TEST_HEADER include/spdk/keyring_module.h 00:03:38.472 TEST_HEADER include/spdk/likely.h 00:03:38.472 TEST_HEADER include/spdk/log.h 00:03:38.472 CC test/thread/poller_perf/poller_perf.o 00:03:38.472 TEST_HEADER include/spdk/lvol.h 00:03:38.472 TEST_HEADER include/spdk/md5.h 00:03:38.472 CC examples/util/zipf/zipf.o 00:03:38.472 TEST_HEADER include/spdk/memory.h 00:03:38.472 TEST_HEADER include/spdk/mmio.h 00:03:38.472 TEST_HEADER include/spdk/nbd.h 00:03:38.472 TEST_HEADER include/spdk/net.h 00:03:38.472 TEST_HEADER include/spdk/notify.h 00:03:38.472 TEST_HEADER include/spdk/nvme.h 00:03:38.472 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.472 CC examples/ioat/perf/perf.o 00:03:38.472 TEST_HEADER include/spdk/nvme_ocssd.h 
00:03:38.472 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.472 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.472 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.472 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.472 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.472 TEST_HEADER include/spdk/nvmf.h 00:03:38.472 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.473 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.473 TEST_HEADER include/spdk/opal.h 00:03:38.473 TEST_HEADER include/spdk/opal_spec.h 00:03:38.473 TEST_HEADER include/spdk/pci_ids.h 00:03:38.473 TEST_HEADER include/spdk/pipe.h 00:03:38.473 TEST_HEADER include/spdk/queue.h 00:03:38.473 TEST_HEADER include/spdk/reduce.h 00:03:38.473 TEST_HEADER include/spdk/rpc.h 00:03:38.473 TEST_HEADER include/spdk/scheduler.h 00:03:38.473 CC test/dma/test_dma/test_dma.o 00:03:38.473 TEST_HEADER include/spdk/scsi.h 00:03:38.473 CC test/app/bdev_svc/bdev_svc.o 00:03:38.473 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.473 TEST_HEADER include/spdk/sock.h 00:03:38.473 TEST_HEADER include/spdk/stdinc.h 00:03:38.473 TEST_HEADER include/spdk/string.h 00:03:38.473 TEST_HEADER include/spdk/thread.h 00:03:38.473 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.473 TEST_HEADER include/spdk/trace.h 00:03:38.473 TEST_HEADER include/spdk/trace_parser.h 00:03:38.473 TEST_HEADER include/spdk/tree.h 00:03:38.473 TEST_HEADER include/spdk/ublk.h 00:03:38.473 TEST_HEADER include/spdk/util.h 00:03:38.473 TEST_HEADER include/spdk/uuid.h 00:03:38.473 TEST_HEADER include/spdk/version.h 00:03:38.473 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.473 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.473 TEST_HEADER include/spdk/vhost.h 00:03:38.473 TEST_HEADER include/spdk/vmd.h 00:03:38.473 TEST_HEADER include/spdk/xor.h 00:03:38.473 TEST_HEADER include/spdk/zipf.h 00:03:38.473 CXX test/cpp_headers/accel.o 00:03:38.473 LINK rpc_client_test 00:03:38.473 LINK interrupt_tgt 00:03:38.473 LINK poller_perf 00:03:38.473 LINK zipf 00:03:38.734 CXX test/cpp_headers/accel_module.o 00:03:38.734 LINK ioat_perf 00:03:38.734 LINK bdev_svc 00:03:38.734 CXX test/cpp_headers/assert.o 00:03:38.734 LINK spdk_trace 00:03:38.734 CC test/app/jsoncat/jsoncat.o 00:03:38.734 CC examples/ioat/verify/verify.o 00:03:38.734 CC test/app/histogram_perf/histogram_perf.o 00:03:38.734 CXX test/cpp_headers/barrier.o 00:03:38.734 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.734 CC test/app/stub/stub.o 00:03:38.734 LINK test_dma 00:03:38.993 LINK jsoncat 00:03:38.993 CC app/trace_record/trace_record.o 00:03:38.993 LINK histogram_perf 00:03:38.993 CXX test/cpp_headers/base64.o 00:03:38.993 CC test/event/event_perf/event_perf.o 00:03:38.993 LINK verify 00:03:38.993 LINK mem_callbacks 00:03:38.993 LINK stub 00:03:38.993 LINK event_perf 00:03:38.993 CC test/event/reactor/reactor.o 00:03:38.993 CXX test/cpp_headers/bdev.o 00:03:38.993 CC app/nvmf_tgt/nvmf_main.o 00:03:38.993 LINK nvme_fuzz 00:03:38.993 CXX test/cpp_headers/bdev_module.o 00:03:39.254 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.254 LINK spdk_trace_record 00:03:39.254 CC test/env/vtophys/vtophys.o 00:03:39.254 CXX test/cpp_headers/bdev_zone.o 00:03:39.254 LINK reactor 00:03:39.254 CXX test/cpp_headers/bit_array.o 00:03:39.254 CC examples/thread/thread/thread_ex.o 00:03:39.254 LINK nvmf_tgt 00:03:39.254 LINK vtophys 00:03:39.254 LINK iscsi_tgt 00:03:39.254 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.254 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.254 CXX test/cpp_headers/bit_pool.o 00:03:39.254 CC 
test/event/reactor_perf/reactor_perf.o 00:03:39.254 CC test/env/memory/memory_ut.o 00:03:39.254 CC test/env/pci/pci_ut.o 00:03:39.515 CXX test/cpp_headers/blob_bdev.o 00:03:39.515 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.515 LINK env_dpdk_post_init 00:03:39.515 LINK thread 00:03:39.515 LINK reactor_perf 00:03:39.515 CC app/spdk_lspci/spdk_lspci.o 00:03:39.515 CXX test/cpp_headers/blobfs.o 00:03:39.515 CC app/spdk_tgt/spdk_tgt.o 00:03:39.515 CC app/spdk_nvme_perf/perf.o 00:03:39.776 CC app/spdk_nvme_identify/identify.o 00:03:39.776 CC test/event/app_repeat/app_repeat.o 00:03:39.776 LINK spdk_lspci 00:03:39.776 CXX test/cpp_headers/blob.o 00:03:39.776 LINK pci_ut 00:03:39.776 CC examples/sock/hello_world/hello_sock.o 00:03:39.776 LINK spdk_tgt 00:03:39.776 LINK app_repeat 00:03:39.776 CXX test/cpp_headers/conf.o 00:03:39.776 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.038 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.038 CXX test/cpp_headers/config.o 00:03:40.038 CXX test/cpp_headers/cpuset.o 00:03:40.038 LINK hello_sock 00:03:40.038 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.038 CXX test/cpp_headers/crc16.o 00:03:40.038 CC test/event/scheduler/scheduler.o 00:03:40.038 CXX test/cpp_headers/crc32.o 00:03:40.038 CC app/spdk_top/spdk_top.o 00:03:40.299 LINK memory_ut 00:03:40.299 LINK spdk_nvme_discover 00:03:40.299 LINK scheduler 00:03:40.299 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.299 CXX test/cpp_headers/crc64.o 00:03:40.299 LINK spdk_nvme_identify 00:03:40.299 LINK spdk_nvme_perf 00:03:40.299 LINK lsvmd 00:03:40.299 LINK vhost_fuzz 00:03:40.299 CC examples/vmd/led/led.o 00:03:40.299 CXX test/cpp_headers/dif.o 00:03:40.561 CC examples/idxd/perf/perf.o 00:03:40.561 CXX test/cpp_headers/dma.o 00:03:40.561 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:40.561 LINK led 00:03:40.561 CC examples/accel/perf/accel_perf.o 00:03:40.561 CXX test/cpp_headers/endian.o 00:03:40.561 CC test/accel/dif/dif.o 00:03:40.561 CC test/blobfs/mkfs/mkfs.o 00:03:40.561 CXX test/cpp_headers/env_dpdk.o 00:03:40.561 CC app/vhost/vhost.o 00:03:40.561 CXX test/cpp_headers/env.o 00:03:40.827 LINK idxd_perf 00:03:40.827 LINK mkfs 00:03:40.827 CXX test/cpp_headers/event.o 00:03:40.827 LINK vhost 00:03:40.827 LINK hello_fsdev 00:03:40.827 CXX test/cpp_headers/fd_group.o 00:03:40.827 CC test/nvme/aer/aer.o 00:03:41.088 CC test/nvme/reset/reset.o 00:03:41.088 CC test/lvol/esnap/esnap.o 00:03:41.088 CC test/nvme/sgl/sgl.o 00:03:41.088 CC test/nvme/e2edp/nvme_dp.o 00:03:41.088 LINK accel_perf 00:03:41.088 LINK iscsi_fuzz 00:03:41.088 CXX test/cpp_headers/fd.o 00:03:41.088 LINK spdk_top 00:03:41.088 CXX test/cpp_headers/file.o 00:03:41.088 LINK reset 00:03:41.088 LINK aer 00:03:41.088 LINK nvme_dp 00:03:41.350 CXX test/cpp_headers/fsdev.o 00:03:41.350 LINK sgl 00:03:41.350 CC app/spdk_dd/spdk_dd.o 00:03:41.350 LINK dif 00:03:41.350 CC examples/blob/hello_world/hello_blob.o 00:03:41.350 CC examples/nvme/hello_world/hello_world.o 00:03:41.350 CXX test/cpp_headers/fsdev_module.o 00:03:41.350 CC test/nvme/overhead/overhead.o 00:03:41.350 CC examples/blob/cli/blobcli.o 00:03:41.350 CXX test/cpp_headers/ftl.o 00:03:41.350 CXX test/cpp_headers/fuse_dispatcher.o 00:03:41.610 CC test/nvme/err_injection/err_injection.o 00:03:41.610 LINK hello_blob 00:03:41.610 LINK hello_world 00:03:41.610 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.610 CXX test/cpp_headers/gpt_spec.o 00:03:41.610 LINK spdk_dd 00:03:41.610 LINK overhead 00:03:41.610 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.610 LINK err_injection 
00:03:41.610 CXX test/cpp_headers/hexlify.o 00:03:41.610 CXX test/cpp_headers/histogram_data.o 00:03:41.610 LINK hello_bdev 00:03:41.610 CC examples/nvme/reconnect/reconnect.o 00:03:41.610 LINK blobcli 00:03:41.869 CXX test/cpp_headers/idxd.o 00:03:41.869 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.869 CC app/fio/nvme/fio_plugin.o 00:03:41.869 CC test/nvme/reserve/reserve.o 00:03:41.869 CC test/nvme/startup/startup.o 00:03:41.869 CC examples/nvme/arbitration/arbitration.o 00:03:41.869 CXX test/cpp_headers/idxd_spec.o 00:03:41.869 LINK startup 00:03:41.869 LINK reserve 00:03:42.127 CC test/bdev/bdevio/bdevio.o 00:03:42.127 LINK reconnect 00:03:42.127 CXX test/cpp_headers/init.o 00:03:42.127 CXX test/cpp_headers/ioat.o 00:03:42.127 CC test/nvme/simple_copy/simple_copy.o 00:03:42.127 LINK nvme_manage 00:03:42.127 CXX test/cpp_headers/ioat_spec.o 00:03:42.127 LINK arbitration 00:03:42.127 LINK spdk_nvme 00:03:42.127 LINK bdevperf 00:03:42.127 CC examples/nvme/hotplug/hotplug.o 00:03:42.127 CC app/fio/bdev/fio_plugin.o 00:03:42.386 LINK simple_copy 00:03:42.386 CXX test/cpp_headers/iscsi_spec.o 00:03:42.386 CC test/nvme/connect_stress/connect_stress.o 00:03:42.386 CC examples/nvme/abort/abort.o 00:03:42.386 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.386 LINK bdevio 00:03:42.386 CXX test/cpp_headers/json.o 00:03:42.386 LINK hotplug 00:03:42.386 CC test/nvme/boot_partition/boot_partition.o 00:03:42.386 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.386 LINK connect_stress 00:03:42.386 LINK cmb_copy 00:03:42.644 CXX test/cpp_headers/jsonrpc.o 00:03:42.644 CXX test/cpp_headers/keyring.o 00:03:42.644 LINK boot_partition 00:03:42.644 CC test/nvme/compliance/nvme_compliance.o 00:03:42.644 LINK pmr_persistence 00:03:42.644 CXX test/cpp_headers/keyring_module.o 00:03:42.644 CXX test/cpp_headers/likely.o 00:03:42.644 LINK abort 00:03:42.644 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.644 CXX test/cpp_headers/log.o 00:03:42.644 CXX test/cpp_headers/lvol.o 00:03:42.644 LINK spdk_bdev 00:03:42.644 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.644 CXX test/cpp_headers/md5.o 00:03:42.903 CXX test/cpp_headers/memory.o 00:03:42.903 CC test/nvme/fdp/fdp.o 00:03:42.903 CXX test/cpp_headers/mmio.o 00:03:42.903 CXX test/cpp_headers/nbd.o 00:03:42.903 CXX test/cpp_headers/net.o 00:03:42.903 LINK fused_ordering 00:03:42.903 LINK nvme_compliance 00:03:42.903 CXX test/cpp_headers/notify.o 00:03:42.903 CC examples/nvmf/nvmf/nvmf.o 00:03:42.903 LINK doorbell_aers 00:03:42.903 CXX test/cpp_headers/nvme.o 00:03:42.903 CXX test/cpp_headers/nvme_intel.o 00:03:42.903 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.903 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.903 CXX test/cpp_headers/nvme_spec.o 00:03:43.161 CXX test/cpp_headers/nvme_zns.o 00:03:43.161 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.161 CC test/nvme/cuse/cuse.o 00:03:43.161 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.161 LINK fdp 00:03:43.161 CXX test/cpp_headers/nvmf.o 00:03:43.161 CXX test/cpp_headers/nvmf_spec.o 00:03:43.161 CXX test/cpp_headers/nvmf_transport.o 00:03:43.161 CXX test/cpp_headers/opal.o 00:03:43.161 LINK nvmf 00:03:43.161 CXX test/cpp_headers/opal_spec.o 00:03:43.161 CXX test/cpp_headers/pci_ids.o 00:03:43.161 CXX test/cpp_headers/pipe.o 00:03:43.161 CXX test/cpp_headers/queue.o 00:03:43.161 CXX test/cpp_headers/reduce.o 00:03:43.161 CXX test/cpp_headers/rpc.o 00:03:43.420 CXX test/cpp_headers/scheduler.o 00:03:43.420 CXX test/cpp_headers/scsi.o 00:03:43.420 CXX test/cpp_headers/scsi_spec.o 00:03:43.420 
CXX test/cpp_headers/sock.o 00:03:43.420 CXX test/cpp_headers/stdinc.o 00:03:43.420 CXX test/cpp_headers/string.o 00:03:43.420 CXX test/cpp_headers/thread.o 00:03:43.420 CXX test/cpp_headers/trace.o 00:03:43.420 CXX test/cpp_headers/trace_parser.o 00:03:43.420 CXX test/cpp_headers/tree.o 00:03:43.420 CXX test/cpp_headers/ublk.o 00:03:43.420 CXX test/cpp_headers/util.o 00:03:43.420 CXX test/cpp_headers/uuid.o 00:03:43.420 CXX test/cpp_headers/version.o 00:03:43.420 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.420 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.420 CXX test/cpp_headers/vhost.o 00:03:43.420 CXX test/cpp_headers/vmd.o 00:03:43.420 CXX test/cpp_headers/xor.o 00:03:43.420 CXX test/cpp_headers/zipf.o 00:03:43.990 LINK cuse 00:03:45.380 LINK esnap 00:03:45.641 00:03:45.641 real 1m2.148s 00:03:45.641 user 5m50.833s 00:03:45.641 sys 1m2.122s 00:03:45.641 11:18:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:45.641 11:18:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:45.641 ************************************ 00:03:45.641 END TEST make 00:03:45.641 ************************************ 00:03:45.641 11:18:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:45.641 11:18:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:45.641 11:18:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:45.641 11:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.641 11:18:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:45.641 11:18:30 -- pm/common@44 -- $ pid=5070 00:03:45.641 11:18:30 -- pm/common@50 -- $ kill -TERM 5070 00:03:45.641 11:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.641 11:18:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:45.641 11:18:30 -- pm/common@44 -- $ pid=5072 00:03:45.641 11:18:30 -- pm/common@50 -- $ kill -TERM 5072 00:03:45.641 11:18:30 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:45.641 11:18:30 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:45.641 11:18:30 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:45.903 11:18:30 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:45.903 11:18:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.903 11:18:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.903 11:18:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.903 11:18:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.903 11:18:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.903 11:18:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.903 11:18:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.903 11:18:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.903 11:18:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.903 11:18:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.903 11:18:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.903 11:18:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:45.903 11:18:30 -- scripts/common.sh@345 -- # : 1 00:03:45.903 11:18:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.903 11:18:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:45.903 11:18:30 -- scripts/common.sh@365 -- # decimal 1 00:03:45.903 11:18:30 -- scripts/common.sh@353 -- # local d=1 00:03:45.903 11:18:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.903 11:18:30 -- scripts/common.sh@355 -- # echo 1 00:03:45.903 11:18:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.903 11:18:30 -- scripts/common.sh@366 -- # decimal 2 00:03:45.903 11:18:30 -- scripts/common.sh@353 -- # local d=2 00:03:45.903 11:18:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.903 11:18:30 -- scripts/common.sh@355 -- # echo 2 00:03:45.903 11:18:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.903 11:18:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.903 11:18:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.903 11:18:30 -- scripts/common.sh@368 -- # return 0 00:03:45.903 11:18:30 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.903 11:18:30 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:45.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.903 --rc genhtml_branch_coverage=1 00:03:45.903 --rc genhtml_function_coverage=1 00:03:45.903 --rc genhtml_legend=1 00:03:45.903 --rc geninfo_all_blocks=1 00:03:45.903 --rc geninfo_unexecuted_blocks=1 00:03:45.903 00:03:45.903 ' 00:03:45.903 11:18:30 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:45.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.903 --rc genhtml_branch_coverage=1 00:03:45.903 --rc genhtml_function_coverage=1 00:03:45.903 --rc genhtml_legend=1 00:03:45.903 --rc geninfo_all_blocks=1 00:03:45.903 --rc geninfo_unexecuted_blocks=1 00:03:45.903 00:03:45.903 ' 00:03:45.903 11:18:30 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:45.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.903 --rc genhtml_branch_coverage=1 00:03:45.903 --rc genhtml_function_coverage=1 00:03:45.903 --rc genhtml_legend=1 00:03:45.903 --rc geninfo_all_blocks=1 00:03:45.903 --rc geninfo_unexecuted_blocks=1 00:03:45.903 00:03:45.903 ' 00:03:45.903 11:18:30 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:45.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.903 --rc genhtml_branch_coverage=1 00:03:45.903 --rc genhtml_function_coverage=1 00:03:45.903 --rc genhtml_legend=1 00:03:45.903 --rc geninfo_all_blocks=1 00:03:45.903 --rc geninfo_unexecuted_blocks=1 00:03:45.903 00:03:45.903 ' 00:03:45.904 11:18:30 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.904 11:18:30 -- nvmf/common.sh@7 -- # uname -s 00:03:45.904 11:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.904 11:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.904 11:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.904 11:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.904 11:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.904 11:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.904 11:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.904 11:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.904 11:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.904 11:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.904 11:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9d229ec-5263-47e1-b343-3b59e9c68251 00:03:45.904 
11:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d9d229ec-5263-47e1-b343-3b59e9c68251 00:03:45.904 11:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.904 11:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.904 11:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.904 11:18:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.904 11:18:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.904 11:18:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.904 11:18:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.904 11:18:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.904 11:18:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.904 11:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.904 11:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.904 11:18:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.904 11:18:30 -- paths/export.sh@5 -- # export PATH 00:03:45.904 11:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.904 11:18:30 -- nvmf/common.sh@51 -- # : 0 00:03:45.904 11:18:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.904 11:18:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.904 11:18:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.904 11:18:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.904 11:18:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.904 11:18:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.904 11:18:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.904 11:18:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.904 11:18:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.904 11:18:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.904 11:18:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.904 11:18:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.904 11:18:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.904 11:18:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.904 11:18:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.904 11:18:30 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.904 11:18:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.904 11:18:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.904 11:18:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.904 11:18:31 -- spdk/autotest.sh@48 -- # udevadm_pid=54189 00:03:45.904 11:18:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.904 11:18:31 -- pm/common@17 -- # local monitor 00:03:45.904 11:18:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.904 11:18:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.904 11:18:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.904 11:18:31 -- pm/common@25 -- # sleep 1 00:03:45.904 11:18:31 -- pm/common@21 -- # date +%s 00:03:45.904 11:18:31 -- pm/common@21 -- # date +%s 00:03:45.904 11:18:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730027911 00:03:45.904 11:18:31 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730027911 00:03:45.904 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730027911_collect-vmstat.pm.log 00:03:45.904 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730027911_collect-cpu-load.pm.log 00:03:46.849 11:18:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.849 11:18:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:46.849 11:18:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.849 11:18:32 -- common/autotest_common.sh@10 -- # set +x 00:03:46.849 11:18:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:46.849 11:18:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:46.849 11:18:32 -- common/autotest_common.sh@10 -- # set +x 00:03:46.849 11:18:32 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:46.849 11:18:32 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:46.849 11:18:32 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:46.849 11:18:32 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:46.849 11:18:32 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.849 11:18:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:46.849 11:18:32 -- common/autotest_common.sh@1453 -- # uname 00:03:46.849 11:18:32 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:46.849 11:18:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:46.849 11:18:32 -- common/autotest_common.sh@1473 -- # uname 00:03:46.849 11:18:32 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:46.849 11:18:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.849 11:18:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.111 lcov: LCOV version 1.15 00:03:47.111 11:18:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:02.032 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:02.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.978 11:19:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.978 11:19:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.978 11:19:01 -- common/autotest_common.sh@10 -- # set +x 00:04:16.978 11:19:01 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.978 11:19:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.978 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:16.978 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:16.978 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:16.978 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:16.978 11:19:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:16.978 11:19:02 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:04:16.978 11:19:02 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:04:16.978 11:19:02 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:16.978 11:19:02 
-- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.978 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:04:16.978 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:16.978 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.979 11:19:02 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:16.979 11:19:02 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:04:16.979 11:19:02 -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:04:16.979 11:19:02 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:16.979 11:19:02 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:16.979 11:19:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:16.979 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.979 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.979 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:16.979 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:16.979 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.979 No valid GPT data, bailing 00:04:16.979 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.979 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:16.979 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:16.979 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.979 1+0 records in 00:04:16.979 1+0 records out 00:04:16.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232204 s, 45.2 MB/s 00:04:16.979 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.979 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.979 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:16.979 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:16.979 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:17.237 No valid GPT data, bailing 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:17.238 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:17.238 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:17.238 1+0 records in 00:04:17.238 1+0 records out 00:04:17.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00522631 s, 201 MB/s 00:04:17.238 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.238 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.238 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:17.238 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:17.238 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:17.238 No valid GPT data, bailing 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:17.238 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:17.238 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:17.238 1+0 
records in 00:04:17.238 1+0 records out 00:04:17.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409227 s, 256 MB/s 00:04:17.238 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.238 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.238 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:17.238 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:17.238 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:17.238 No valid GPT data, bailing 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:17.238 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:17.238 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:17.238 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:17.238 1+0 records in 00:04:17.238 1+0 records out 00:04:17.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054792 s, 191 MB/s 00:04:17.238 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.238 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.238 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:17.238 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:17.238 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:17.497 No valid GPT data, bailing 00:04:17.497 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:17.497 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:17.497 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:17.497 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:17.497 1+0 records in 00:04:17.497 1+0 records out 00:04:17.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533699 s, 196 MB/s 00:04:17.497 11:19:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.497 11:19:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.497 11:19:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:17.497 11:19:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:17.497 11:19:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:17.497 No valid GPT data, bailing 00:04:17.497 11:19:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:17.497 11:19:02 -- scripts/common.sh@394 -- # pt= 00:04:17.497 11:19:02 -- scripts/common.sh@395 -- # return 1 00:04:17.497 11:19:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:17.497 1+0 records in 00:04:17.497 1+0 records out 00:04:17.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561298 s, 187 MB/s 00:04:17.497 11:19:02 -- spdk/autotest.sh@105 -- # sync 00:04:17.756 11:19:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.756 11:19:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.756 11:19:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.669 11:19:04 -- spdk/autotest.sh@111 -- # uname -s 00:04:19.669 11:19:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:19.669 11:19:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:19.669 11:19:04 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:19.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.498 
Hugepages 00:04:20.498 node hugesize free / total 00:04:20.498 node0 1048576kB 0 / 0 00:04:20.498 node0 2048kB 0 / 0 00:04:20.498 00:04:20.498 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.498 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.498 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:20.498 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:20.757 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:20.757 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:20.757 11:19:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:20.757 11:19:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:20.757 11:19:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:20.757 11:19:05 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.583 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.583 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.844 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.844 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:21.844 11:19:06 -- common/autotest_common.sh@1513 -- # sleep 1 00:04:22.788 11:19:07 -- common/autotest_common.sh@1514 -- # bdfs=() 00:04:22.788 11:19:07 -- common/autotest_common.sh@1514 -- # local bdfs 00:04:22.788 11:19:07 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:04:22.788 11:19:07 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:04:22.788 11:19:07 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:22.788 11:19:07 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:22.788 11:19:07 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.788 11:19:07 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:22.788 11:19:07 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.788 11:19:08 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:22.788 11:19:08 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.788 11:19:08 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.361 Waiting for block devices as requested 00:04:23.361 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.361 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.622 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.622 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.037 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:29.037 11:19:13 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1484 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:29.037 11:19:13 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1539 -- # continue 00:04:29.037 11:19:13 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:29.037 11:19:13 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1539 -- # continue 00:04:29.037 11:19:13 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # grep 0000:00:12.0/nvme/nvme 
00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.037 11:19:13 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme2 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:29.037 11:19:13 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:29.037 11:19:13 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:29.037 11:19:13 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:29.037 11:19:13 -- common/autotest_common.sh@1539 -- # continue 00:04:29.037 11:19:13 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:29.037 11:19:13 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:29.037 11:19:14 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1483 -- # grep 0000:00:13.0/nvme/nvme 00:04:29.037 11:19:14 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:29.037 11:19:14 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme3 ]] 00:04:29.037 11:19:14 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:29.037 11:19:14 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:29.037 11:19:14 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:29.037 11:19:14 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:29.037 11:19:14 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:29.037 11:19:14 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme3 00:04:29.037 11:19:14 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:29.037 11:19:14 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:29.037 11:19:14 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:29.037 11:19:14 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 
00:04:29.037 11:19:14 -- common/autotest_common.sh@1539 -- # continue 00:04:29.037 11:19:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:29.037 11:19:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:29.037 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.037 11:19:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:29.037 11:19:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.037 11:19:14 -- common/autotest_common.sh@10 -- # set +x 00:04:29.037 11:19:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.863 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.863 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.125 11:19:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:30.125 11:19:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:30.125 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:30.125 11:19:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:30.125 11:19:15 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:30.125 11:19:15 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.125 11:19:15 -- common/autotest_common.sh@1559 -- # bdfs=() 00:04:30.125 11:19:15 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:04:30.125 11:19:15 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:04:30.125 11:19:15 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:04:30.125 11:19:15 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:04:30.125 11:19:15 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:30.125 11:19:15 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:30.125 11:19:15 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.125 11:19:15 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.125 11:19:15 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:30.125 11:19:15 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:30.125 11:19:15 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:30.125 11:19:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:30.125 11:19:15 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.125 11:19:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:30.125 11:19:15 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.125 11:19:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:30.125 11:19:15 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
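The nvme_namespace_revert pass traced above (around 11:19:13) repeats the same probe for each of the four controllers: resolve the PCI address to its /dev/nvmeX character device through sysfs, read the OACS word from nvme id-ctrl, and read unvmcap; when the controller advertises namespace management (bit 3 of OACS, hence oacs_ns_manage=8) but reports no unallocated capacity, the loop simply continues. A condensed sketch of that probe, assuming the same sysfs layout as in this run; the bdf assignment and the final echo are placeholders added here for illustration, while the other commands follow the ones shown in the trace:

bdf=0000:00:10.0                                   # placeholder: any BDF from the list printed above
sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$sysfs")                    # e.g. /dev/nvme1 for 0000:00:10.0 in this run
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # ' 0x12a' on these emulated drives
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # ' 0' here
if (( oacs & 0x8 )) && [[ $unvmcap -eq 0 ]]; then
    echo "$ctrlr: namespace management supported, no unallocated capacity to revert"
fi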
00:04:30.125 11:19:15 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:30.125 11:19:15 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:30.125 11:19:15 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.125 11:19:15 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:04:30.125 11:19:15 -- common/autotest_common.sh@1568 -- # return 0 00:04:30.125 11:19:15 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:30.125 11:19:15 -- common/autotest_common.sh@1576 -- # return 0 00:04:30.125 11:19:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:30.125 11:19:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:30.125 11:19:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.125 11:19:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:30.125 11:19:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:30.125 11:19:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.125 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:30.125 11:19:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:30.125 11:19:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.125 11:19:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.125 11:19:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.125 11:19:15 -- common/autotest_common.sh@10 -- # set +x 00:04:30.125 ************************************ 00:04:30.125 START TEST env 00:04:30.125 ************************************ 00:04:30.125 11:19:15 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.125 * Looking for test storage... 00:04:30.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.125 11:19:15 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:30.125 11:19:15 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:30.125 11:19:15 env -- common/autotest_common.sh@1689 -- # lcov --version 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:30.387 11:19:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.387 11:19:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.387 11:19:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.387 11:19:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.387 11:19:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.387 11:19:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.387 11:19:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.387 11:19:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.387 11:19:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.387 11:19:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.387 11:19:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.387 11:19:15 env -- scripts/common.sh@344 -- # case "$op" in 00:04:30.387 11:19:15 env -- scripts/common.sh@345 -- # : 1 00:04:30.387 11:19:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.387 11:19:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.387 11:19:15 env -- scripts/common.sh@365 -- # decimal 1 00:04:30.387 11:19:15 env -- scripts/common.sh@353 -- # local d=1 00:04:30.387 11:19:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.387 11:19:15 env -- scripts/common.sh@355 -- # echo 1 00:04:30.387 11:19:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.387 11:19:15 env -- scripts/common.sh@366 -- # decimal 2 00:04:30.387 11:19:15 env -- scripts/common.sh@353 -- # local d=2 00:04:30.387 11:19:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.387 11:19:15 env -- scripts/common.sh@355 -- # echo 2 00:04:30.387 11:19:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.387 11:19:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.387 11:19:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.387 11:19:15 env -- scripts/common.sh@368 -- # return 0 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:30.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.387 --rc genhtml_branch_coverage=1 00:04:30.387 --rc genhtml_function_coverage=1 00:04:30.387 --rc genhtml_legend=1 00:04:30.387 --rc geninfo_all_blocks=1 00:04:30.387 --rc geninfo_unexecuted_blocks=1 00:04:30.387 00:04:30.387 ' 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:30.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.387 --rc genhtml_branch_coverage=1 00:04:30.387 --rc genhtml_function_coverage=1 00:04:30.387 --rc genhtml_legend=1 00:04:30.387 --rc geninfo_all_blocks=1 00:04:30.387 --rc geninfo_unexecuted_blocks=1 00:04:30.387 00:04:30.387 ' 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:30.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.387 --rc genhtml_branch_coverage=1 00:04:30.387 --rc genhtml_function_coverage=1 00:04:30.387 --rc genhtml_legend=1 00:04:30.387 --rc geninfo_all_blocks=1 00:04:30.387 --rc geninfo_unexecuted_blocks=1 00:04:30.387 00:04:30.387 ' 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:30.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.387 --rc genhtml_branch_coverage=1 00:04:30.387 --rc genhtml_function_coverage=1 00:04:30.387 --rc genhtml_legend=1 00:04:30.387 --rc geninfo_all_blocks=1 00:04:30.387 --rc geninfo_unexecuted_blocks=1 00:04:30.387 00:04:30.387 ' 00:04:30.387 11:19:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.387 11:19:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.387 11:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.387 ************************************ 00:04:30.387 START TEST env_memory 00:04:30.387 ************************************ 00:04:30.387 11:19:15 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.387 00:04:30.387 00:04:30.387 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.387 http://cunit.sourceforge.net/ 00:04:30.387 00:04:30.387 00:04:30.387 Suite: memory 00:04:30.387 Test: alloc and free memory map ...[2024-10-27 11:19:15.519188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.387 passed 00:04:30.387 Test: mem map translation ...[2024-10-27 11:19:15.558056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.387 [2024-10-27 11:19:15.558109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.387 [2024-10-27 11:19:15.558164] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.387 [2024-10-27 11:19:15.558180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.387 passed 00:04:30.387 Test: mem map registration ...[2024-10-27 11:19:15.626326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:30.387 [2024-10-27 11:19:15.626376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.387 passed 00:04:30.649 Test: mem map adjacent registrations ...passed 00:04:30.650 00:04:30.650 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.650 suites 1 1 n/a 0 0 00:04:30.650 tests 4 4 4 0 0 00:04:30.650 asserts 152 152 152 0 n/a 00:04:30.650 00:04:30.650 Elapsed time = 0.233 seconds 00:04:30.650 00:04:30.650 real 0m0.271s 00:04:30.650 user 0m0.240s 00:04:30.650 sys 0m0.023s 00:04:30.650 ************************************ 00:04:30.650 END TEST env_memory 00:04:30.650 ************************************ 00:04:30.650 11:19:15 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.650 11:19:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.650 11:19:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.650 11:19:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.650 11:19:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.650 11:19:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.650 ************************************ 00:04:30.650 START TEST env_vtophys 00:04:30.650 ************************************ 00:04:30.650 11:19:15 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:30.650 EAL: lib.eal log level changed from notice to debug 00:04:30.650 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 1 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 2 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 3 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 4 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 5 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 6 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 7 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 8 as core 0 on socket 0 00:04:30.650 EAL: Detected lcore 9 as core 0 on socket 0 00:04:30.650 EAL: Maximum logical cores by configuration: 128 00:04:30.650 EAL: Detected CPU lcores: 10 00:04:30.650 EAL: Detected NUMA nodes: 1 00:04:30.650 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.650 EAL: Detected shared linkage of DPDK 00:04:30.650 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:30.650 EAL: Selected IOVA mode 'PA' 00:04:30.650 EAL: Probing VFIO support... 00:04:30.650 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.650 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:30.650 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.650 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.650 EAL: Setting up physically contiguous memory... 00:04:30.650 EAL: Setting maximum number of open files to 524288 00:04:30.650 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.650 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.650 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.650 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.650 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.650 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.650 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.650 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.650 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.650 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.650 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.650 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.650 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.650 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.650 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.650 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.650 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.650 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.650 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.650 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.650 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.650 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.650 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.650 EAL: Hugepages will be freed exactly as allocated. 00:04:30.650 EAL: No shared files mode enabled, IPC is disabled 00:04:30.650 EAL: No shared files mode enabled, IPC is disabled 00:04:30.911 EAL: TSC frequency is ~2600000 KHz 00:04:30.911 EAL: Main lcore 0 is ready (tid=7f523b244a40;cpuset=[0]) 00:04:30.911 EAL: Trying to obtain current memory policy. 00:04:30.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.911 EAL: Restoring previous memory policy: 0 00:04:30.911 EAL: request: mp_malloc_sync 00:04:30.911 EAL: No shared files mode enabled, IPC is disabled 00:04:30.911 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.911 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:30.911 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.911 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.911 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:30.911 00:04:30.911 00:04:30.911 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.911 http://cunit.sourceforge.net/ 00:04:30.911 00:04:30.911 00:04:30.911 Suite: components_suite 00:04:31.172 Test: vtophys_malloc_test ...passed 00:04:31.172 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.172 EAL: Restoring previous memory policy: 4 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.172 EAL: Trying to obtain current memory policy. 00:04:31.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.172 EAL: Restoring previous memory policy: 4 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.172 EAL: Trying to obtain current memory policy. 00:04:31.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.172 EAL: Restoring previous memory policy: 4 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.172 EAL: Trying to obtain current memory policy. 00:04:31.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.172 EAL: Restoring previous memory policy: 4 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.172 EAL: Trying to obtain current memory policy. 00:04:31.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.172 EAL: Restoring previous memory policy: 4 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.172 EAL: request: mp_malloc_sync 00:04:31.172 EAL: No shared files mode enabled, IPC is disabled 00:04:31.172 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.433 EAL: Trying to obtain current memory policy. 
00:04:31.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.433 EAL: Restoring previous memory policy: 4 00:04:31.433 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.433 EAL: request: mp_malloc_sync 00:04:31.433 EAL: No shared files mode enabled, IPC is disabled 00:04:31.433 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.433 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.433 EAL: request: mp_malloc_sync 00:04:31.433 EAL: No shared files mode enabled, IPC is disabled 00:04:31.433 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.433 EAL: Trying to obtain current memory policy. 00:04:31.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.433 EAL: Restoring previous memory policy: 4 00:04:31.433 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.433 EAL: request: mp_malloc_sync 00:04:31.433 EAL: No shared files mode enabled, IPC is disabled 00:04:31.433 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.694 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.694 EAL: request: mp_malloc_sync 00:04:31.694 EAL: No shared files mode enabled, IPC is disabled 00:04:31.694 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.694 EAL: Trying to obtain current memory policy. 00:04:31.694 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.955 EAL: Restoring previous memory policy: 4 00:04:31.955 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.955 EAL: request: mp_malloc_sync 00:04:31.955 EAL: No shared files mode enabled, IPC is disabled 00:04:31.955 EAL: Heap on socket 0 was expanded by 258MB 00:04:32.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.215 EAL: request: mp_malloc_sync 00:04:32.215 EAL: No shared files mode enabled, IPC is disabled 00:04:32.215 EAL: Heap on socket 0 was shrunk by 258MB 00:04:32.473 EAL: Trying to obtain current memory policy. 00:04:32.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.473 EAL: Restoring previous memory policy: 4 00:04:32.473 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.473 EAL: request: mp_malloc_sync 00:04:32.473 EAL: No shared files mode enabled, IPC is disabled 00:04:32.473 EAL: Heap on socket 0 was expanded by 514MB 00:04:33.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.304 EAL: request: mp_malloc_sync 00:04:33.304 EAL: No shared files mode enabled, IPC is disabled 00:04:33.304 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.875 EAL: Trying to obtain current memory policy. 
00:04:33.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.875 EAL: Restoring previous memory policy: 4 00:04:33.875 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.875 EAL: request: mp_malloc_sync 00:04:33.875 EAL: No shared files mode enabled, IPC is disabled 00:04:33.875 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.260 EAL: request: mp_malloc_sync 00:04:35.260 EAL: No shared files mode enabled, IPC is disabled 00:04:35.260 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.832 passed 00:04:35.832 00:04:35.832 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.832 suites 1 1 n/a 0 0 00:04:35.832 tests 2 2 2 0 0 00:04:35.832 asserts 5824 5824 5824 0 n/a 00:04:35.832 00:04:35.832 Elapsed time = 5.050 seconds 00:04:35.832 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.832 EAL: request: mp_malloc_sync 00:04:35.832 EAL: No shared files mode enabled, IPC is disabled 00:04:35.832 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.832 EAL: No shared files mode enabled, IPC is disabled 00:04:35.832 EAL: No shared files mode enabled, IPC is disabled 00:04:35.832 EAL: No shared files mode enabled, IPC is disabled 00:04:36.094 00:04:36.094 real 0m5.329s 00:04:36.094 user 0m4.366s 00:04:36.094 sys 0m0.810s 00:04:36.094 11:19:21 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.094 ************************************ 00:04:36.094 END TEST env_vtophys 00:04:36.094 ************************************ 00:04:36.094 11:19:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.094 11:19:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.094 11:19:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.094 11:19:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.094 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.094 ************************************ 00:04:36.094 START TEST env_pci 00:04:36.094 ************************************ 00:04:36.094 11:19:21 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.094 00:04:36.094 00:04:36.094 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.094 http://cunit.sourceforge.net/ 00:04:36.094 00:04:36.094 00:04:36.094 Suite: pci 00:04:36.094 Test: pci_hook ...[2024-10-27 11:19:21.204980] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56959 has claimed it 00:04:36.094 passed 00:04:36.094 00:04:36.094 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.094 suites 1 1 n/a 0 0 00:04:36.094 tests 1 1 1 0 0 00:04:36.094 asserts 25 25 25 0 n/a 00:04:36.094 00:04:36.094 Elapsed time = 0.006 seconds 00:04:36.094 EAL: Cannot find device (10000:00:01.0) 00:04:36.094 EAL: Failed to attach device on primary process 00:04:36.094 ************************************ 00:04:36.094 END TEST env_pci 00:04:36.094 ************************************ 00:04:36.094 00:04:36.094 real 0m0.057s 00:04:36.094 user 0m0.027s 00:04:36.094 sys 0m0.030s 00:04:36.094 11:19:21 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.094 11:19:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.094 11:19:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.094 11:19:21 env -- env/env.sh@15 -- # uname 00:04:36.094 11:19:21 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.094 11:19:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.094 11:19:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.094 11:19:21 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:36.094 11:19:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.094 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.094 ************************************ 00:04:36.094 START TEST env_dpdk_post_init 00:04:36.094 ************************************ 00:04:36.094 11:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.094 EAL: Detected CPU lcores: 10 00:04:36.094 EAL: Detected NUMA nodes: 1 00:04:36.094 EAL: Detected shared linkage of DPDK 00:04:36.094 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.094 EAL: Selected IOVA mode 'PA' 00:04:36.357 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:36.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:36.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:36.357 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:36.357 Starting DPDK initialization... 00:04:36.357 Starting SPDK post initialization... 00:04:36.357 SPDK NVMe probe 00:04:36.357 Attaching to 0000:00:10.0 00:04:36.357 Attaching to 0000:00:11.0 00:04:36.357 Attaching to 0000:00:12.0 00:04:36.357 Attaching to 0000:00:13.0 00:04:36.357 Attached to 0000:00:10.0 00:04:36.357 Attached to 0000:00:11.0 00:04:36.357 Attached to 0000:00:13.0 00:04:36.357 Attached to 0000:00:12.0 00:04:36.357 Cleaning up... 
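The env_dpdk_post_init run above boots the SPDK environment with the same -c 0x1 core mask and --base-virtaddr that appear in its EAL parameter handling, then probes the emulated NVMe controllers. Below is a minimal C sketch of that bring-up plus one virtual-to-physical translation of the kind the vtophys suite validated earlier; function and field names are taken from include/spdk/env.h as remembered, so they should be checked against the v25.01-pre tree used in this job rather than treated as confirmed by this log.

/*
 * Hedged sketch, not part of the test run: environment bring-up matching the
 * -c 0x1 / --base-virtaddr parameters above, plus one vtophys translation.
 * Names are from include/spdk/env.h as remembered.
 */
#include <inttypes.h>
#include <stdio.h>

#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;
    void *buf;
    uint64_t paddr;

    spdk_env_opts_init(&opts);
    opts.name = "env_sketch";               /* arbitrary process name        */
    opts.core_mask = "0x1";                 /* same core mask as the test    */
    opts.base_virtaddr = 0x200000000000ULL; /* same --base-virtaddr as above */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* 4 KiB pinned buffer from the hugepage-backed heap. */
    buf = spdk_dma_malloc(4096, 4096, NULL);
    if (buf == NULL) {
        spdk_env_fini();
        return 1;
    }

    /* Virtual-to-physical translation, the operation vtophys exercises. */
    paddr = spdk_vtophys(buf, NULL);
    if (paddr == SPDK_VTOPHYS_ERROR) {
        fprintf(stderr, "translation failed\n");
    } else {
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
    }

    spdk_dma_free(buf);
    spdk_env_fini();
    return 0;
}

Allocating from spdk_dma_malloc() is what makes the translation well defined here: the buffer comes from the hugepage-backed heap whose expansion and shrinking the mem event callbacks above were reporting.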
00:04:36.357 00:04:36.357 real 0m0.243s 00:04:36.357 user 0m0.078s 00:04:36.357 sys 0m0.065s 00:04:36.357 ************************************ 00:04:36.357 END TEST env_dpdk_post_init 00:04:36.357 ************************************ 00:04:36.357 11:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.357 11:19:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.357 11:19:21 env -- env/env.sh@26 -- # uname 00:04:36.357 11:19:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.357 11:19:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.357 11:19:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.357 11:19:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.357 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.357 ************************************ 00:04:36.357 START TEST env_mem_callbacks 00:04:36.357 ************************************ 00:04:36.357 11:19:21 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.357 EAL: Detected CPU lcores: 10 00:04:36.357 EAL: Detected NUMA nodes: 1 00:04:36.357 EAL: Detected shared linkage of DPDK 00:04:36.620 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.620 EAL: Selected IOVA mode 'PA' 00:04:36.620 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.620 00:04:36.620 00:04:36.620 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.620 http://cunit.sourceforge.net/ 00:04:36.620 00:04:36.620 00:04:36.620 Suite: memory 00:04:36.620 Test: test ... 00:04:36.620 register 0x200000200000 2097152 00:04:36.620 malloc 3145728 00:04:36.620 register 0x200000400000 4194304 00:04:36.620 buf 0x2000004fffc0 len 3145728 PASSED 00:04:36.620 malloc 64 00:04:36.620 buf 0x2000004ffec0 len 64 PASSED 00:04:36.620 malloc 4194304 00:04:36.620 register 0x200000800000 6291456 00:04:36.620 buf 0x2000009fffc0 len 4194304 PASSED 00:04:36.620 free 0x2000004fffc0 3145728 00:04:36.620 free 0x2000004ffec0 64 00:04:36.620 unregister 0x200000400000 4194304 PASSED 00:04:36.620 free 0x2000009fffc0 4194304 00:04:36.620 unregister 0x200000800000 6291456 PASSED 00:04:36.620 malloc 8388608 00:04:36.620 register 0x200000400000 10485760 00:04:36.620 buf 0x2000005fffc0 len 8388608 PASSED 00:04:36.620 free 0x2000005fffc0 8388608 00:04:36.620 unregister 0x200000400000 10485760 PASSED 00:04:36.620 passed 00:04:36.620 00:04:36.620 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.620 suites 1 1 n/a 0 0 00:04:36.620 tests 1 1 1 0 0 00:04:36.620 asserts 15 15 15 0 n/a 00:04:36.620 00:04:36.620 Elapsed time = 0.038 seconds 00:04:36.620 00:04:36.620 real 0m0.210s 00:04:36.620 user 0m0.058s 00:04:36.620 sys 0m0.048s 00:04:36.620 11:19:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.620 11:19:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 ************************************ 00:04:36.620 END TEST env_mem_callbacks 00:04:36.620 ************************************ 00:04:36.620 00:04:36.620 real 0m6.541s 00:04:36.620 user 0m4.925s 00:04:36.620 sys 0m1.176s 00:04:36.620 11:19:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.620 11:19:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 ************************************ 00:04:36.620 END TEST env 00:04:36.620 
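The register/unregister lines printed by env_mem_callbacks above come from a mem map created with a notify callback: every spdk_mem_register() and spdk_mem_unregister() call on a suitably aligned region is reported to that callback, and the "invalid spdk_mem_register parameters" errors earlier in env_memory are what smaller or misaligned values produce. A hedged C sketch of that pattern follows; the struct spdk_mem_map_ops field names are quoted from memory of include/spdk/env.h and should be double-checked, and environment initialization (as in the previous sketch) is assumed to have happened already.

/*
 * Hedged sketch, not part of the test run: the notify pattern behind the
 * register/unregister lines above.
 */
#include <stdio.h>
#include <stdlib.h>

#include "spdk/env.h"

static int
sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    switch (action) {
    case SPDK_MEM_MAP_NOTIFY_REGISTER:
        /* A real consumer would record a translation here, e.g. with
         * spdk_mem_map_set_translation(map, (uint64_t)vaddr, size, ...). */
        printf("register   %p len %zu\n", vaddr, size);
        break;
    case SPDK_MEM_MAP_NOTIFY_UNREGISTER:
        printf("unregister %p len %zu\n", vaddr, size);
        break;
    }
    return 0;
}

static const struct spdk_mem_map_ops sketch_ops = {
    .notify_cb = sketch_notify,
    .are_contiguous = NULL,
};

void
mem_callbacks_sketch(void)
{
    struct spdk_mem_map *map;
    void *region = NULL;

    map = spdk_mem_map_alloc(0, &sketch_ops, NULL);
    if (map == NULL) {
        return;
    }

    /* spdk_mem_register() expects 2 MiB aligned addresses and lengths; the
     * invalid-parameter errors in env_memory above show what happens when
     * the address or length is not a 2 MiB multiple. */
    if (posix_memalign(&region, 0x200000, 0x200000) == 0) {
        spdk_mem_register(region, 0x200000);   /* -> NOTIFY_REGISTER   */
        spdk_mem_unregister(region, 0x200000); /* -> NOTIFY_UNREGISTER */
        free(region);
    }

    spdk_mem_map_free(&map);
}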
************************************ 00:04:36.620 11:19:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.620 11:19:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.620 11:19:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.620 11:19:21 -- common/autotest_common.sh@10 -- # set +x 00:04:36.620 ************************************ 00:04:36.620 START TEST rpc 00:04:36.620 ************************************ 00:04:36.620 11:19:21 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.882 * Looking for test storage... 00:04:36.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.882 11:19:21 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:36.882 11:19:21 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:36.882 11:19:21 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:36.882 11:19:22 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.882 11:19:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.882 11:19:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.882 11:19:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.882 11:19:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.882 11:19:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.882 11:19:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.882 11:19:22 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.882 11:19:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.882 11:19:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.882 11:19:22 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.882 11:19:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.882 11:19:22 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.882 11:19:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.882 11:19:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.882 11:19:22 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.882 11:19:22 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.882 11:19:22 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:36.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.882 --rc genhtml_branch_coverage=1 00:04:36.882 --rc genhtml_function_coverage=1 00:04:36.882 --rc genhtml_legend=1 00:04:36.883 --rc geninfo_all_blocks=1 00:04:36.883 --rc geninfo_unexecuted_blocks=1 00:04:36.883 00:04:36.883 ' 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.883 --rc genhtml_branch_coverage=1 00:04:36.883 --rc genhtml_function_coverage=1 00:04:36.883 --rc genhtml_legend=1 00:04:36.883 --rc geninfo_all_blocks=1 00:04:36.883 --rc geninfo_unexecuted_blocks=1 00:04:36.883 00:04:36.883 ' 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.883 --rc genhtml_branch_coverage=1 00:04:36.883 --rc genhtml_function_coverage=1 00:04:36.883 --rc genhtml_legend=1 00:04:36.883 --rc geninfo_all_blocks=1 00:04:36.883 --rc geninfo_unexecuted_blocks=1 00:04:36.883 00:04:36.883 ' 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:36.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.883 --rc genhtml_branch_coverage=1 00:04:36.883 --rc genhtml_function_coverage=1 00:04:36.883 --rc genhtml_legend=1 00:04:36.883 --rc geninfo_all_blocks=1 00:04:36.883 --rc geninfo_unexecuted_blocks=1 00:04:36.883 00:04:36.883 ' 00:04:36.883 11:19:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57080 00:04:36.883 11:19:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.883 11:19:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57080 00:04:36.883 11:19:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 57080 ']' 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
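From here the rpc suite starts spdk_tgt and drives it over the Unix socket /var/tmp/spdk.sock; rpc_cmd is a thin wrapper that ends up running roughly scripts/rpc.py -s /var/tmp/spdk.sock <method> [args]. On the target side, methods such as bdev_malloc_create and bdev_get_bdevs are implemented with the JSON-RPC registration pattern sketched below; the method name rpc_sketch_ping is invented for illustration, and the macro and helper names follow include/spdk/rpc.h and include/spdk/jsonrpc.h as remembered rather than being confirmed by this log.

/*
 * Hedged sketch, not part of the test run: how a target-side JSON-RPC method
 * is registered in SPDK. The bdev_* calls issued through rpc_cmd below reach
 * handlers registered this way inside spdk_tgt.
 */
#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"
#include "spdk/json.h"

static void
rpc_sketch_ping(struct spdk_jsonrpc_request *request,
                const struct spdk_json_val *params)
{
    struct spdk_json_write_ctx *w;

    if (params != NULL) {
        /* This toy method takes no parameters. */
        spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                         "rpc_sketch_ping takes no parameters");
        return;
    }

    w = spdk_jsonrpc_begin_result(request);
    spdk_json_write_string(w, "pong");
    spdk_jsonrpc_end_result(request, w);
}
/* Callable once the target is running, like the bdev methods used below. */
SPDK_RPC_REGISTER("rpc_sketch_ping", rpc_sketch_ping, SPDK_RPC_RUNTIME)

If such a method were actually compiled into the target, the shell-side equivalent of the rpc_cmd calls below would be roughly scripts/rpc.py -s /var/tmp/spdk.sock rpc_sketch_ping.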
00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.883 11:19:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 [2024-10-27 11:19:22.115082] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:04:36.883 [2024-10-27 11:19:22.115370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57080 ] 00:04:37.144 [2024-10-27 11:19:22.272468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.144 [2024-10-27 11:19:22.357064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.144 [2024-10-27 11:19:22.357107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57080' to capture a snapshot of events at runtime. 00:04:37.144 [2024-10-27 11:19:22.357115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.144 [2024-10-27 11:19:22.357123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.144 [2024-10-27 11:19:22.357129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57080 for offline analysis/debug. 00:04:37.144 [2024-10-27 11:19:22.357816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.716 11:19:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.716 11:19:22 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:37.716 11:19:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.716 11:19:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.716 11:19:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.716 11:19:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.716 11:19:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.716 11:19:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.716 11:19:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.716 ************************************ 00:04:37.716 START TEST rpc_integrity 00:04:37.716 ************************************ 00:04:37.716 11:19:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:37.716 11:19:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.716 11:19:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.716 11:19:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.716 11:19:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.716 11:19:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.716 11:19:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.977 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.977 11:19:23 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.977 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.977 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.977 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.977 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.977 { 00:04:37.977 "name": "Malloc0", 00:04:37.977 "aliases": [ 00:04:37.977 "bf03a121-e612-496f-9989-bd71c52c010d" 00:04:37.977 ], 00:04:37.977 "product_name": "Malloc disk", 00:04:37.977 "block_size": 512, 00:04:37.977 "num_blocks": 16384, 00:04:37.977 "uuid": "bf03a121-e612-496f-9989-bd71c52c010d", 00:04:37.977 "assigned_rate_limits": { 00:04:37.977 "rw_ios_per_sec": 0, 00:04:37.977 "rw_mbytes_per_sec": 0, 00:04:37.977 "r_mbytes_per_sec": 0, 00:04:37.977 "w_mbytes_per_sec": 0 00:04:37.977 }, 00:04:37.977 "claimed": false, 00:04:37.977 "zoned": false, 00:04:37.977 "supported_io_types": { 00:04:37.977 "read": true, 00:04:37.977 "write": true, 00:04:37.977 "unmap": true, 00:04:37.977 "flush": true, 00:04:37.977 "reset": true, 00:04:37.977 "nvme_admin": false, 00:04:37.977 "nvme_io": false, 00:04:37.977 "nvme_io_md": false, 00:04:37.977 "write_zeroes": true, 00:04:37.977 "zcopy": true, 00:04:37.977 "get_zone_info": false, 00:04:37.977 "zone_management": false, 00:04:37.977 "zone_append": false, 00:04:37.977 "compare": false, 00:04:37.977 "compare_and_write": false, 00:04:37.977 "abort": true, 00:04:37.977 "seek_hole": false, 00:04:37.977 "seek_data": false, 00:04:37.977 "copy": true, 00:04:37.977 "nvme_iov_md": false 00:04:37.977 }, 00:04:37.977 "memory_domains": [ 00:04:37.977 { 00:04:37.977 "dma_device_id": "system", 00:04:37.977 "dma_device_type": 1 00:04:37.977 }, 00:04:37.977 { 00:04:37.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.977 "dma_device_type": 2 00:04:37.977 } 00:04:37.977 ], 00:04:37.977 "driver_specific": {} 00:04:37.977 } 00:04:37.977 ]' 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.977 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 [2024-10-27 11:19:23.069267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.978 [2024-10-27 11:19:23.069401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.978 [2024-10-27 11:19:23.069428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:37.978 [2024-10-27 11:19:23.069438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.978 [2024-10-27 11:19:23.071142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.978 [2024-10-27 11:19:23.071176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.978 Passthru0 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 
11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.978 { 00:04:37.978 "name": "Malloc0", 00:04:37.978 "aliases": [ 00:04:37.978 "bf03a121-e612-496f-9989-bd71c52c010d" 00:04:37.978 ], 00:04:37.978 "product_name": "Malloc disk", 00:04:37.978 "block_size": 512, 00:04:37.978 "num_blocks": 16384, 00:04:37.978 "uuid": "bf03a121-e612-496f-9989-bd71c52c010d", 00:04:37.978 "assigned_rate_limits": { 00:04:37.978 "rw_ios_per_sec": 0, 00:04:37.978 "rw_mbytes_per_sec": 0, 00:04:37.978 "r_mbytes_per_sec": 0, 00:04:37.978 "w_mbytes_per_sec": 0 00:04:37.978 }, 00:04:37.978 "claimed": true, 00:04:37.978 "claim_type": "exclusive_write", 00:04:37.978 "zoned": false, 00:04:37.978 "supported_io_types": { 00:04:37.978 "read": true, 00:04:37.978 "write": true, 00:04:37.978 "unmap": true, 00:04:37.978 "flush": true, 00:04:37.978 "reset": true, 00:04:37.978 "nvme_admin": false, 00:04:37.978 "nvme_io": false, 00:04:37.978 "nvme_io_md": false, 00:04:37.978 "write_zeroes": true, 00:04:37.978 "zcopy": true, 00:04:37.978 "get_zone_info": false, 00:04:37.978 "zone_management": false, 00:04:37.978 "zone_append": false, 00:04:37.978 "compare": false, 00:04:37.978 "compare_and_write": false, 00:04:37.978 "abort": true, 00:04:37.978 "seek_hole": false, 00:04:37.978 "seek_data": false, 00:04:37.978 "copy": true, 00:04:37.978 "nvme_iov_md": false 00:04:37.978 }, 00:04:37.978 "memory_domains": [ 00:04:37.978 { 00:04:37.978 "dma_device_id": "system", 00:04:37.978 "dma_device_type": 1 00:04:37.978 }, 00:04:37.978 { 00:04:37.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.978 "dma_device_type": 2 00:04:37.978 } 00:04:37.978 ], 00:04:37.978 "driver_specific": {} 00:04:37.978 }, 00:04:37.978 { 00:04:37.978 "name": "Passthru0", 00:04:37.978 "aliases": [ 00:04:37.978 "2b04c4fe-4d04-5877-ba01-d2f272ca2c0c" 00:04:37.978 ], 00:04:37.978 "product_name": "passthru", 00:04:37.978 "block_size": 512, 00:04:37.978 "num_blocks": 16384, 00:04:37.978 "uuid": "2b04c4fe-4d04-5877-ba01-d2f272ca2c0c", 00:04:37.978 "assigned_rate_limits": { 00:04:37.978 "rw_ios_per_sec": 0, 00:04:37.978 "rw_mbytes_per_sec": 0, 00:04:37.978 "r_mbytes_per_sec": 0, 00:04:37.978 "w_mbytes_per_sec": 0 00:04:37.978 }, 00:04:37.978 "claimed": false, 00:04:37.978 "zoned": false, 00:04:37.978 "supported_io_types": { 00:04:37.978 "read": true, 00:04:37.978 "write": true, 00:04:37.978 "unmap": true, 00:04:37.978 "flush": true, 00:04:37.978 "reset": true, 00:04:37.978 "nvme_admin": false, 00:04:37.978 "nvme_io": false, 00:04:37.978 "nvme_io_md": false, 00:04:37.978 "write_zeroes": true, 00:04:37.978 "zcopy": true, 00:04:37.978 "get_zone_info": false, 00:04:37.978 "zone_management": false, 00:04:37.978 "zone_append": false, 00:04:37.978 "compare": false, 00:04:37.978 "compare_and_write": false, 00:04:37.978 "abort": true, 00:04:37.978 "seek_hole": false, 00:04:37.978 "seek_data": false, 00:04:37.978 "copy": true, 00:04:37.978 "nvme_iov_md": false 00:04:37.978 }, 00:04:37.978 "memory_domains": [ 00:04:37.978 { 00:04:37.978 "dma_device_id": "system", 00:04:37.978 "dma_device_type": 1 00:04:37.978 }, 00:04:37.978 { 00:04:37.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.978 "dma_device_type": 2 
00:04:37.978 } 00:04:37.978 ], 00:04:37.978 "driver_specific": { 00:04:37.978 "passthru": { 00:04:37.978 "name": "Passthru0", 00:04:37.978 "base_bdev_name": "Malloc0" 00:04:37.978 } 00:04:37.978 } 00:04:37.978 } 00:04:37.978 ]' 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.978 ************************************ 00:04:37.978 END TEST rpc_integrity 00:04:37.978 ************************************ 00:04:37.978 11:19:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.978 00:04:37.978 real 0m0.243s 00:04:37.978 user 0m0.128s 00:04:37.978 sys 0m0.041s 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.978 11:19:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.978 11:19:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.978 11:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 ************************************ 00:04:37.978 START TEST rpc_plugins 00:04:37.978 ************************************ 00:04:37.978 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:37.978 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.978 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.978 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.978 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.978 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.978 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.979 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.239 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.239 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.240 { 00:04:38.240 "name": "Malloc1", 00:04:38.240 "aliases": 
[ 00:04:38.240 "3f96bad5-3527-4360-ae4c-eedb5cef97df" 00:04:38.240 ], 00:04:38.240 "product_name": "Malloc disk", 00:04:38.240 "block_size": 4096, 00:04:38.240 "num_blocks": 256, 00:04:38.240 "uuid": "3f96bad5-3527-4360-ae4c-eedb5cef97df", 00:04:38.240 "assigned_rate_limits": { 00:04:38.240 "rw_ios_per_sec": 0, 00:04:38.240 "rw_mbytes_per_sec": 0, 00:04:38.240 "r_mbytes_per_sec": 0, 00:04:38.240 "w_mbytes_per_sec": 0 00:04:38.240 }, 00:04:38.240 "claimed": false, 00:04:38.240 "zoned": false, 00:04:38.240 "supported_io_types": { 00:04:38.240 "read": true, 00:04:38.240 "write": true, 00:04:38.240 "unmap": true, 00:04:38.240 "flush": true, 00:04:38.240 "reset": true, 00:04:38.240 "nvme_admin": false, 00:04:38.240 "nvme_io": false, 00:04:38.240 "nvme_io_md": false, 00:04:38.240 "write_zeroes": true, 00:04:38.240 "zcopy": true, 00:04:38.240 "get_zone_info": false, 00:04:38.240 "zone_management": false, 00:04:38.240 "zone_append": false, 00:04:38.240 "compare": false, 00:04:38.240 "compare_and_write": false, 00:04:38.240 "abort": true, 00:04:38.240 "seek_hole": false, 00:04:38.240 "seek_data": false, 00:04:38.240 "copy": true, 00:04:38.240 "nvme_iov_md": false 00:04:38.240 }, 00:04:38.240 "memory_domains": [ 00:04:38.240 { 00:04:38.240 "dma_device_id": "system", 00:04:38.240 "dma_device_type": 1 00:04:38.240 }, 00:04:38.240 { 00:04:38.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.240 "dma_device_type": 2 00:04:38.240 } 00:04:38.240 ], 00:04:38.240 "driver_specific": {} 00:04:38.240 } 00:04:38.240 ]' 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.240 ************************************ 00:04:38.240 END TEST rpc_plugins 00:04:38.240 ************************************ 00:04:38.240 11:19:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.240 00:04:38.240 real 0m0.111s 00:04:38.240 user 0m0.062s 00:04:38.240 sys 0m0.016s 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.240 11:19:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 11:19:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.240 11:19:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.240 11:19:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.240 11:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 ************************************ 00:04:38.240 START TEST rpc_trace_cmd_test 00:04:38.240 ************************************ 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.240 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57080", 00:04:38.240 "tpoint_group_mask": "0x8", 00:04:38.240 "iscsi_conn": { 00:04:38.240 "mask": "0x2", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "scsi": { 00:04:38.240 "mask": "0x4", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "bdev": { 00:04:38.240 "mask": "0x8", 00:04:38.240 "tpoint_mask": "0xffffffffffffffff" 00:04:38.240 }, 00:04:38.240 "nvmf_rdma": { 00:04:38.240 "mask": "0x10", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "nvmf_tcp": { 00:04:38.240 "mask": "0x20", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "ftl": { 00:04:38.240 "mask": "0x40", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "blobfs": { 00:04:38.240 "mask": "0x80", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "dsa": { 00:04:38.240 "mask": "0x200", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "thread": { 00:04:38.240 "mask": "0x400", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "nvme_pcie": { 00:04:38.240 "mask": "0x800", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "iaa": { 00:04:38.240 "mask": "0x1000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "nvme_tcp": { 00:04:38.240 "mask": "0x2000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "bdev_nvme": { 00:04:38.240 "mask": "0x4000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "sock": { 00:04:38.240 "mask": "0x8000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "blob": { 00:04:38.240 "mask": "0x10000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "bdev_raid": { 00:04:38.240 "mask": "0x20000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 }, 00:04:38.240 "scheduler": { 00:04:38.240 "mask": "0x40000", 00:04:38.240 "tpoint_mask": "0x0" 00:04:38.240 } 00:04:38.240 }' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.240 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.502 ************************************ 00:04:38.502 END TEST rpc_trace_cmd_test 00:04:38.502 ************************************ 00:04:38.502 11:19:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:38.502 00:04:38.502 real 0m0.162s 
00:04:38.502 user 0m0.132s 00:04:38.502 sys 0m0.022s 00:04:38.502 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 11:19:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.502 11:19:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.502 11:19:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.502 11:19:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.502 11:19:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.502 11:19:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 ************************************ 00:04:38.502 START TEST rpc_daemon_integrity 00:04:38.502 ************************************ 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.502 { 00:04:38.502 "name": "Malloc2", 00:04:38.502 "aliases": [ 00:04:38.502 "eb9114d0-2445-4a90-bb33-213c2e6907c2" 00:04:38.502 ], 00:04:38.502 "product_name": "Malloc disk", 00:04:38.502 "block_size": 512, 00:04:38.502 "num_blocks": 16384, 00:04:38.502 "uuid": "eb9114d0-2445-4a90-bb33-213c2e6907c2", 00:04:38.502 "assigned_rate_limits": { 00:04:38.502 "rw_ios_per_sec": 0, 00:04:38.502 "rw_mbytes_per_sec": 0, 00:04:38.502 "r_mbytes_per_sec": 0, 00:04:38.502 "w_mbytes_per_sec": 0 00:04:38.502 }, 00:04:38.502 "claimed": false, 00:04:38.502 "zoned": false, 00:04:38.502 "supported_io_types": { 00:04:38.502 "read": true, 00:04:38.502 "write": true, 00:04:38.502 "unmap": true, 00:04:38.502 "flush": true, 00:04:38.502 "reset": true, 00:04:38.502 "nvme_admin": false, 00:04:38.502 "nvme_io": false, 00:04:38.502 "nvme_io_md": false, 00:04:38.502 "write_zeroes": true, 00:04:38.502 "zcopy": true, 00:04:38.502 "get_zone_info": false, 00:04:38.502 "zone_management": false, 00:04:38.502 "zone_append": false, 00:04:38.502 "compare": false, 00:04:38.502 
"compare_and_write": false, 00:04:38.502 "abort": true, 00:04:38.502 "seek_hole": false, 00:04:38.502 "seek_data": false, 00:04:38.502 "copy": true, 00:04:38.502 "nvme_iov_md": false 00:04:38.502 }, 00:04:38.502 "memory_domains": [ 00:04:38.502 { 00:04:38.502 "dma_device_id": "system", 00:04:38.502 "dma_device_type": 1 00:04:38.502 }, 00:04:38.502 { 00:04:38.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.502 "dma_device_type": 2 00:04:38.502 } 00:04:38.502 ], 00:04:38.502 "driver_specific": {} 00:04:38.502 } 00:04:38.502 ]' 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 [2024-10-27 11:19:23.681019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.502 [2024-10-27 11:19:23.681128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.502 [2024-10-27 11:19:23.681160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:38.502 [2024-10-27 11:19:23.681233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.502 [2024-10-27 11:19:23.682912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.502 [2024-10-27 11:19:23.683002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.502 Passthru0 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.502 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.502 { 00:04:38.502 "name": "Malloc2", 00:04:38.502 "aliases": [ 00:04:38.502 "eb9114d0-2445-4a90-bb33-213c2e6907c2" 00:04:38.502 ], 00:04:38.502 "product_name": "Malloc disk", 00:04:38.502 "block_size": 512, 00:04:38.502 "num_blocks": 16384, 00:04:38.502 "uuid": "eb9114d0-2445-4a90-bb33-213c2e6907c2", 00:04:38.502 "assigned_rate_limits": { 00:04:38.502 "rw_ios_per_sec": 0, 00:04:38.502 "rw_mbytes_per_sec": 0, 00:04:38.502 "r_mbytes_per_sec": 0, 00:04:38.502 "w_mbytes_per_sec": 0 00:04:38.502 }, 00:04:38.502 "claimed": true, 00:04:38.502 "claim_type": "exclusive_write", 00:04:38.502 "zoned": false, 00:04:38.502 "supported_io_types": { 00:04:38.502 "read": true, 00:04:38.502 "write": true, 00:04:38.502 "unmap": true, 00:04:38.502 "flush": true, 00:04:38.502 "reset": true, 00:04:38.502 "nvme_admin": false, 00:04:38.502 "nvme_io": false, 00:04:38.502 "nvme_io_md": false, 00:04:38.502 "write_zeroes": true, 00:04:38.502 "zcopy": true, 00:04:38.502 "get_zone_info": false, 00:04:38.503 "zone_management": false, 00:04:38.503 "zone_append": false, 00:04:38.503 "compare": false, 00:04:38.503 "compare_and_write": false, 00:04:38.503 "abort": true, 00:04:38.503 "seek_hole": false, 00:04:38.503 "seek_data": false, 
00:04:38.503 "copy": true, 00:04:38.503 "nvme_iov_md": false 00:04:38.503 }, 00:04:38.503 "memory_domains": [ 00:04:38.503 { 00:04:38.503 "dma_device_id": "system", 00:04:38.503 "dma_device_type": 1 00:04:38.503 }, 00:04:38.503 { 00:04:38.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.503 "dma_device_type": 2 00:04:38.503 } 00:04:38.503 ], 00:04:38.503 "driver_specific": {} 00:04:38.503 }, 00:04:38.503 { 00:04:38.503 "name": "Passthru0", 00:04:38.503 "aliases": [ 00:04:38.503 "9d51b439-6985-5448-b167-8f598d47123d" 00:04:38.503 ], 00:04:38.503 "product_name": "passthru", 00:04:38.503 "block_size": 512, 00:04:38.503 "num_blocks": 16384, 00:04:38.503 "uuid": "9d51b439-6985-5448-b167-8f598d47123d", 00:04:38.503 "assigned_rate_limits": { 00:04:38.503 "rw_ios_per_sec": 0, 00:04:38.503 "rw_mbytes_per_sec": 0, 00:04:38.503 "r_mbytes_per_sec": 0, 00:04:38.503 "w_mbytes_per_sec": 0 00:04:38.503 }, 00:04:38.503 "claimed": false, 00:04:38.503 "zoned": false, 00:04:38.503 "supported_io_types": { 00:04:38.503 "read": true, 00:04:38.503 "write": true, 00:04:38.503 "unmap": true, 00:04:38.503 "flush": true, 00:04:38.503 "reset": true, 00:04:38.503 "nvme_admin": false, 00:04:38.503 "nvme_io": false, 00:04:38.503 "nvme_io_md": false, 00:04:38.503 "write_zeroes": true, 00:04:38.503 "zcopy": true, 00:04:38.503 "get_zone_info": false, 00:04:38.503 "zone_management": false, 00:04:38.503 "zone_append": false, 00:04:38.503 "compare": false, 00:04:38.503 "compare_and_write": false, 00:04:38.503 "abort": true, 00:04:38.503 "seek_hole": false, 00:04:38.503 "seek_data": false, 00:04:38.503 "copy": true, 00:04:38.503 "nvme_iov_md": false 00:04:38.503 }, 00:04:38.503 "memory_domains": [ 00:04:38.503 { 00:04:38.503 "dma_device_id": "system", 00:04:38.503 "dma_device_type": 1 00:04:38.503 }, 00:04:38.503 { 00:04:38.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.503 "dma_device_type": 2 00:04:38.503 } 00:04:38.503 ], 00:04:38.503 "driver_specific": { 00:04:38.503 "passthru": { 00:04:38.503 "name": "Passthru0", 00:04:38.503 "base_bdev_name": "Malloc2" 00:04:38.503 } 00:04:38.503 } 00:04:38.503 } 00:04:38.503 ]' 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:38.503 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.764 ************************************ 00:04:38.764 END TEST rpc_daemon_integrity 00:04:38.764 ************************************ 00:04:38.764 11:19:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.764 00:04:38.764 real 0m0.238s 00:04:38.764 user 0m0.132s 00:04:38.764 sys 0m0.029s 00:04:38.764 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.764 11:19:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.764 11:19:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.764 11:19:23 rpc -- rpc/rpc.sh@84 -- # killprocess 57080 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@950 -- # '[' -z 57080 ']' 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@954 -- # kill -0 57080 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@955 -- # uname 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57080 00:04:38.764 killing process with pid 57080 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57080' 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@969 -- # kill 57080 00:04:38.764 11:19:23 rpc -- common/autotest_common.sh@974 -- # wait 57080 00:04:40.139 00:04:40.139 real 0m3.101s 00:04:40.139 user 0m3.536s 00:04:40.139 sys 0m0.563s 00:04:40.139 11:19:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.139 ************************************ 00:04:40.139 END TEST rpc 00:04:40.139 11:19:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.139 ************************************ 00:04:40.139 11:19:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.139 11:19:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.139 11:19:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.139 11:19:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.139 ************************************ 00:04:40.139 START TEST skip_rpc 00:04:40.139 ************************************ 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.139 * Looking for test storage... 
00:04:40.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.139 11:19:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:40.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.139 --rc genhtml_branch_coverage=1 00:04:40.139 --rc genhtml_function_coverage=1 00:04:40.139 --rc genhtml_legend=1 00:04:40.139 --rc geninfo_all_blocks=1 00:04:40.139 --rc geninfo_unexecuted_blocks=1 00:04:40.139 00:04:40.139 ' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:40.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.139 --rc genhtml_branch_coverage=1 00:04:40.139 --rc genhtml_function_coverage=1 00:04:40.139 --rc genhtml_legend=1 00:04:40.139 --rc geninfo_all_blocks=1 00:04:40.139 --rc geninfo_unexecuted_blocks=1 00:04:40.139 00:04:40.139 ' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:04:40.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.139 --rc genhtml_branch_coverage=1 00:04:40.139 --rc genhtml_function_coverage=1 00:04:40.139 --rc genhtml_legend=1 00:04:40.139 --rc geninfo_all_blocks=1 00:04:40.139 --rc geninfo_unexecuted_blocks=1 00:04:40.139 00:04:40.139 ' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:40.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.139 --rc genhtml_branch_coverage=1 00:04:40.139 --rc genhtml_function_coverage=1 00:04:40.139 --rc genhtml_legend=1 00:04:40.139 --rc geninfo_all_blocks=1 00:04:40.139 --rc geninfo_unexecuted_blocks=1 00:04:40.139 00:04:40.139 ' 00:04:40.139 11:19:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.139 11:19:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.139 11:19:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.139 11:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.140 ************************************ 00:04:40.140 START TEST skip_rpc 00:04:40.140 ************************************ 00:04:40.140 11:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:40.140 11:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57287 00:04:40.140 11:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.140 11:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.140 11:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.140 [2024-10-27 11:19:25.288778] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
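A minimal stand-alone sketch of what the skip_rpc case above checks: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC call must fail. Binary and script paths are the ones used in this run; the 5-second settle delay mirrors the test's sleep and is illustrative:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target with the JSON-RPC server disabled.
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 & tgt=$!
    sleep 5
    # With no RPC server listening, spdk_get_version is expected to fail.
    if "$SPDK/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC succeeded despite --no-rpc-server" >&2
        kill "$tgt"; exit 1
    fi
    kill "$tgt"; wait "$tgt" 2>/dev/null || true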
00:04:40.140 [2024-10-27 11:19:25.288894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57287 ] 00:04:40.399 [2024-10-27 11:19:25.445067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.399 [2024-10-27 11:19:25.519879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57287 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57287 ']' 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57287 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57287 00:04:45.680 killing process with pid 57287 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57287' 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57287 00:04:45.680 11:19:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57287 00:04:46.248 ************************************ 00:04:46.248 END TEST skip_rpc 00:04:46.248 ************************************ 00:04:46.248 00:04:46.248 real 0m6.200s 00:04:46.248 user 0m5.853s 00:04:46.248 sys 0m0.245s 00:04:46.248 11:19:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.248 11:19:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:46.248 11:19:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.248 11:19:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.248 11:19:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.248 11:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.248 ************************************ 00:04:46.248 START TEST skip_rpc_with_json 00:04:46.248 ************************************ 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57386 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57386 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57386 ']' 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.248 11:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.507 [2024-10-27 11:19:31.538614] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:04:46.507 [2024-10-27 11:19:31.538728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57386 ] 00:04:46.507 [2024-10-27 11:19:31.684035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.507 [2024-10-27 11:19:31.756937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.074 [2024-10-27 11:19:32.321368] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.074 request: 00:04:47.074 { 00:04:47.074 "trtype": "tcp", 00:04:47.074 "method": "nvmf_get_transports", 00:04:47.074 "req_id": 1 00:04:47.074 } 00:04:47.074 Got JSON-RPC error response 00:04:47.074 response: 00:04:47.074 { 00:04:47.074 "code": -19, 00:04:47.074 "message": "No such device" 00:04:47.074 } 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.074 [2024-10-27 11:19:32.333445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.074 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.333 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.333 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.333 { 00:04:47.333 "subsystems": [ 00:04:47.333 { 00:04:47.333 "subsystem": "fsdev", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "fsdev_set_opts", 00:04:47.333 "params": { 00:04:47.333 "fsdev_io_pool_size": 65535, 00:04:47.333 "fsdev_io_cache_size": 256 00:04:47.333 } 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "keyring", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "iobuf", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "iobuf_set_options", 00:04:47.333 "params": { 00:04:47.333 "small_pool_count": 8192, 00:04:47.333 "large_pool_count": 1024, 00:04:47.333 "small_bufsize": 8192, 00:04:47.333 "large_bufsize": 135168, 00:04:47.333 "enable_numa": false 00:04:47.333 } 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "sock", 00:04:47.333 "config": [ 00:04:47.333 { 
00:04:47.333 "method": "sock_set_default_impl", 00:04:47.333 "params": { 00:04:47.333 "impl_name": "posix" 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "sock_impl_set_options", 00:04:47.333 "params": { 00:04:47.333 "impl_name": "ssl", 00:04:47.333 "recv_buf_size": 4096, 00:04:47.333 "send_buf_size": 4096, 00:04:47.333 "enable_recv_pipe": true, 00:04:47.333 "enable_quickack": false, 00:04:47.333 "enable_placement_id": 0, 00:04:47.333 "enable_zerocopy_send_server": true, 00:04:47.333 "enable_zerocopy_send_client": false, 00:04:47.333 "zerocopy_threshold": 0, 00:04:47.333 "tls_version": 0, 00:04:47.333 "enable_ktls": false 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "sock_impl_set_options", 00:04:47.333 "params": { 00:04:47.333 "impl_name": "posix", 00:04:47.333 "recv_buf_size": 2097152, 00:04:47.333 "send_buf_size": 2097152, 00:04:47.333 "enable_recv_pipe": true, 00:04:47.333 "enable_quickack": false, 00:04:47.333 "enable_placement_id": 0, 00:04:47.333 "enable_zerocopy_send_server": true, 00:04:47.333 "enable_zerocopy_send_client": false, 00:04:47.333 "zerocopy_threshold": 0, 00:04:47.333 "tls_version": 0, 00:04:47.333 "enable_ktls": false 00:04:47.333 } 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "vmd", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "accel", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "accel_set_options", 00:04:47.333 "params": { 00:04:47.333 "small_cache_size": 128, 00:04:47.333 "large_cache_size": 16, 00:04:47.333 "task_count": 2048, 00:04:47.333 "sequence_count": 2048, 00:04:47.333 "buf_count": 2048 00:04:47.333 } 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "bdev", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "bdev_set_options", 00:04:47.333 "params": { 00:04:47.333 "bdev_io_pool_size": 65535, 00:04:47.333 "bdev_io_cache_size": 256, 00:04:47.333 "bdev_auto_examine": true, 00:04:47.333 "iobuf_small_cache_size": 128, 00:04:47.333 "iobuf_large_cache_size": 16 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "bdev_raid_set_options", 00:04:47.333 "params": { 00:04:47.333 "process_window_size_kb": 1024, 00:04:47.333 "process_max_bandwidth_mb_sec": 0 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "bdev_iscsi_set_options", 00:04:47.333 "params": { 00:04:47.333 "timeout_sec": 30 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "bdev_nvme_set_options", 00:04:47.333 "params": { 00:04:47.333 "action_on_timeout": "none", 00:04:47.333 "timeout_us": 0, 00:04:47.333 "timeout_admin_us": 0, 00:04:47.333 "keep_alive_timeout_ms": 10000, 00:04:47.333 "arbitration_burst": 0, 00:04:47.333 "low_priority_weight": 0, 00:04:47.333 "medium_priority_weight": 0, 00:04:47.333 "high_priority_weight": 0, 00:04:47.333 "nvme_adminq_poll_period_us": 10000, 00:04:47.333 "nvme_ioq_poll_period_us": 0, 00:04:47.333 "io_queue_requests": 0, 00:04:47.333 "delay_cmd_submit": true, 00:04:47.333 "transport_retry_count": 4, 00:04:47.333 "bdev_retry_count": 3, 00:04:47.333 "transport_ack_timeout": 0, 00:04:47.333 "ctrlr_loss_timeout_sec": 0, 00:04:47.333 "reconnect_delay_sec": 0, 00:04:47.333 "fast_io_fail_timeout_sec": 0, 00:04:47.333 "disable_auto_failback": false, 00:04:47.333 "generate_uuids": false, 00:04:47.333 "transport_tos": 0, 00:04:47.333 "nvme_error_stat": false, 00:04:47.333 "rdma_srq_size": 0, 00:04:47.333 "io_path_stat": false, 
00:04:47.333 "allow_accel_sequence": false, 00:04:47.333 "rdma_max_cq_size": 0, 00:04:47.333 "rdma_cm_event_timeout_ms": 0, 00:04:47.333 "dhchap_digests": [ 00:04:47.333 "sha256", 00:04:47.333 "sha384", 00:04:47.333 "sha512" 00:04:47.333 ], 00:04:47.333 "dhchap_dhgroups": [ 00:04:47.333 "null", 00:04:47.333 "ffdhe2048", 00:04:47.333 "ffdhe3072", 00:04:47.333 "ffdhe4096", 00:04:47.333 "ffdhe6144", 00:04:47.333 "ffdhe8192" 00:04:47.333 ] 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "bdev_nvme_set_hotplug", 00:04:47.333 "params": { 00:04:47.333 "period_us": 100000, 00:04:47.333 "enable": false 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "bdev_wait_for_examine" 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "scsi", 00:04:47.333 "config": null 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "scheduler", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "framework_set_scheduler", 00:04:47.333 "params": { 00:04:47.333 "name": "static" 00:04:47.333 } 00:04:47.333 } 00:04:47.333 ] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "vhost_scsi", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "vhost_blk", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "ublk", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "nbd", 00:04:47.333 "config": [] 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "subsystem": "nvmf", 00:04:47.333 "config": [ 00:04:47.333 { 00:04:47.333 "method": "nvmf_set_config", 00:04:47.333 "params": { 00:04:47.333 "discovery_filter": "match_any", 00:04:47.333 "admin_cmd_passthru": { 00:04:47.333 "identify_ctrlr": false 00:04:47.333 }, 00:04:47.333 "dhchap_digests": [ 00:04:47.333 "sha256", 00:04:47.333 "sha384", 00:04:47.333 "sha512" 00:04:47.333 ], 00:04:47.333 "dhchap_dhgroups": [ 00:04:47.333 "null", 00:04:47.333 "ffdhe2048", 00:04:47.333 "ffdhe3072", 00:04:47.333 "ffdhe4096", 00:04:47.333 "ffdhe6144", 00:04:47.333 "ffdhe8192" 00:04:47.333 ] 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "nvmf_set_max_subsystems", 00:04:47.333 "params": { 00:04:47.333 "max_subsystems": 1024 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "nvmf_set_crdt", 00:04:47.333 "params": { 00:04:47.333 "crdt1": 0, 00:04:47.333 "crdt2": 0, 00:04:47.333 "crdt3": 0 00:04:47.333 } 00:04:47.333 }, 00:04:47.333 { 00:04:47.333 "method": "nvmf_create_transport", 00:04:47.333 "params": { 00:04:47.334 "trtype": "TCP", 00:04:47.334 "max_queue_depth": 128, 00:04:47.334 "max_io_qpairs_per_ctrlr": 127, 00:04:47.334 "in_capsule_data_size": 4096, 00:04:47.334 "max_io_size": 131072, 00:04:47.334 "io_unit_size": 131072, 00:04:47.334 "max_aq_depth": 128, 00:04:47.334 "num_shared_buffers": 511, 00:04:47.334 "buf_cache_size": 4294967295, 00:04:47.334 "dif_insert_or_strip": false, 00:04:47.334 "zcopy": false, 00:04:47.334 "c2h_success": true, 00:04:47.334 "sock_priority": 0, 00:04:47.334 "abort_timeout_sec": 1, 00:04:47.334 "ack_timeout": 0, 00:04:47.334 "data_wr_pool_size": 0 00:04:47.334 } 00:04:47.334 } 00:04:47.334 ] 00:04:47.334 }, 00:04:47.334 { 00:04:47.334 "subsystem": "iscsi", 00:04:47.334 "config": [ 00:04:47.334 { 00:04:47.334 "method": "iscsi_set_options", 00:04:47.334 "params": { 00:04:47.334 "node_base": "iqn.2016-06.io.spdk", 00:04:47.334 "max_sessions": 128, 00:04:47.334 "max_connections_per_session": 2, 00:04:47.334 "max_queue_depth": 64, 00:04:47.334 
"default_time2wait": 2, 00:04:47.334 "default_time2retain": 20, 00:04:47.334 "first_burst_length": 8192, 00:04:47.334 "immediate_data": true, 00:04:47.334 "allow_duplicated_isid": false, 00:04:47.334 "error_recovery_level": 0, 00:04:47.334 "nop_timeout": 60, 00:04:47.334 "nop_in_interval": 30, 00:04:47.334 "disable_chap": false, 00:04:47.334 "require_chap": false, 00:04:47.334 "mutual_chap": false, 00:04:47.334 "chap_group": 0, 00:04:47.334 "max_large_datain_per_connection": 64, 00:04:47.334 "max_r2t_per_connection": 4, 00:04:47.334 "pdu_pool_size": 36864, 00:04:47.334 "immediate_data_pool_size": 16384, 00:04:47.334 "data_out_pool_size": 2048 00:04:47.334 } 00:04:47.334 } 00:04:47.334 ] 00:04:47.334 } 00:04:47.334 ] 00:04:47.334 } 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57386 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57386 ']' 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57386 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57386 00:04:47.334 killing process with pid 57386 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57386' 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57386 00:04:47.334 11:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57386 00:04:48.797 11:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57420 00:04:48.797 11:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.797 11:19:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57420 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57420 ']' 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57420 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57420 00:04:54.058 killing process with pid 57420 00:04:54.058 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.059 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.059 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57420' 00:04:54.059 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- 
# kill 57420 00:04:54.059 11:19:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57420 00:04:54.624 11:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.624 11:19:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.624 00:04:54.624 real 0m8.402s 00:04:54.624 user 0m8.054s 00:04:54.624 sys 0m0.527s 00:04:54.624 11:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.624 ************************************ 00:04:54.624 END TEST skip_rpc_with_json 00:04:54.624 ************************************ 00:04:54.624 11:19:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.882 11:19:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.882 11:19:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.882 11:19:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.882 11:19:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.882 ************************************ 00:04:54.882 START TEST skip_rpc_with_delay 00:04:54.882 ************************************ 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.882 11:19:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.882 [2024-10-27 11:19:39.986473] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
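The skip_rpc_with_json round trip that finished just above reduces to: create a TCP transport over RPC, dump the live configuration with save_config, then relaunch the target from that JSON with no RPC server and confirm the transport is re-created during load. A condensed sketch using the same paths and the same 'TCP Transport Init' marker the test greps for; the sleeps stand in for the test's waitforlisten helper:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    CONFIG=$SPDK/test/rpc/config.json
    LOG=$SPDK/test/rpc/log.txt

    # 1) Target with an RPC server; create the TCP transport, then dump config.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 & tgt=$!
    sleep 5
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp
    "$SPDK/scripts/rpc.py" save_config > "$CONFIG"
    kill "$tgt"; wait "$tgt" 2>/dev/null || true

    # 2) Relaunch purely from the saved JSON and verify the transport comes back.
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 & tgt=$!
    sleep 5
    grep -q 'TCP Transport Init' "$LOG" && echo 'config replayed: TCP transport re-created'
    kill "$tgt"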
00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.882 00:04:54.882 real 0m0.104s 00:04:54.882 user 0m0.057s 00:04:54.882 sys 0m0.046s 00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.882 ************************************ 00:04:54.882 END TEST skip_rpc_with_delay 00:04:54.882 ************************************ 00:04:54.882 11:19:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.882 11:19:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.882 11:19:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.882 11:19:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.882 11:19:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.882 11:19:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.882 11:19:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.882 ************************************ 00:04:54.882 START TEST exit_on_failed_rpc_init 00:04:54.882 ************************************ 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57542 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57542 00:04:54.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57542 ']' 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.882 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.882 [2024-10-27 11:19:40.146277] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:04:54.883 [2024-10-27 11:19:40.146405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57542 ] 00:04:55.140 [2024-10-27 11:19:40.306640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.140 [2024-10-27 11:19:40.400540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.709 11:19:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.967 [2024-10-27 11:19:41.053746] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:04:55.967 [2024-10-27 11:19:41.053860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57555 ] 00:04:55.967 [2024-10-27 11:19:41.214261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.225 [2024-10-27 11:19:41.308778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.225 [2024-10-27 11:19:41.308987] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
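The error just above is the point of exit_on_failed_rpc_init: both instances default to the /var/tmp/spdk.sock RPC socket, so the second listen fails and the app exits non-zero. Outside the test, two targets can coexist by giving each its own RPC socket and core mask; socket names below are illustrative, while -r (spdk_tgt) and -s (rpc.py) are the standard switches for selecting the RPC socket:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    # Two targets, distinct core masks and distinct RPC sockets -- no collision.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk_a.sock & a=$!
    "$SPDK/build/bin/spdk_tgt" -m 0x2 -r /var/tmp/spdk_b.sock & b=$!
    sleep 5
    # Address each instance by pointing rpc.py at the matching socket.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_a.sock spdk_get_version
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_b.sock spdk_get_version
    kill "$a" "$b"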
00:04:56.225 [2024-10-27 11:19:41.309005] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.226 [2024-10-27 11:19:41.309018] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57542 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57542 ']' 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57542 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.226 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57542 00:04:56.485 killing process with pid 57542 00:04:56.485 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.485 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.485 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57542' 00:04:56.485 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57542 00:04:56.485 11:19:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57542 00:04:57.420 ************************************ 00:04:57.420 END TEST exit_on_failed_rpc_init 00:04:57.420 ************************************ 00:04:57.420 00:04:57.420 real 0m2.574s 00:04:57.420 user 0m2.863s 00:04:57.420 sys 0m0.414s 00:04:57.420 11:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.420 11:19:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.420 11:19:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.420 00:04:57.420 real 0m17.630s 00:04:57.420 user 0m16.971s 00:04:57.420 sys 0m1.397s 00:04:57.420 11:19:42 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.420 11:19:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.420 ************************************ 00:04:57.420 END TEST skip_rpc 00:04:57.420 ************************************ 00:04:57.678 11:19:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:57.678 11:19:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.678 11:19:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.678 11:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 
************************************ 00:04:57.678 START TEST rpc_client 00:04:57.678 ************************************ 00:04:57.678 11:19:42 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:57.678 * Looking for test storage... 00:04:57.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:57.678 11:19:42 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:57.678 11:19:42 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:04:57.678 11:19:42 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:57.678 11:19:42 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:57.678 11:19:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.678 11:19:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.678 11:19:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.678 11:19:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.679 11:19:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.679 --rc genhtml_branch_coverage=1 00:04:57.679 --rc genhtml_function_coverage=1 00:04:57.679 --rc genhtml_legend=1 00:04:57.679 --rc geninfo_all_blocks=1 00:04:57.679 --rc geninfo_unexecuted_blocks=1 00:04:57.679 00:04:57.679 ' 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.679 --rc genhtml_branch_coverage=1 00:04:57.679 --rc genhtml_function_coverage=1 00:04:57.679 --rc genhtml_legend=1 00:04:57.679 --rc geninfo_all_blocks=1 00:04:57.679 --rc geninfo_unexecuted_blocks=1 00:04:57.679 00:04:57.679 ' 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.679 --rc genhtml_branch_coverage=1 00:04:57.679 --rc genhtml_function_coverage=1 00:04:57.679 --rc genhtml_legend=1 00:04:57.679 --rc geninfo_all_blocks=1 00:04:57.679 --rc geninfo_unexecuted_blocks=1 00:04:57.679 00:04:57.679 ' 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:57.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.679 --rc genhtml_branch_coverage=1 00:04:57.679 --rc genhtml_function_coverage=1 00:04:57.679 --rc genhtml_legend=1 00:04:57.679 --rc geninfo_all_blocks=1 00:04:57.679 --rc geninfo_unexecuted_blocks=1 00:04:57.679 00:04:57.679 ' 00:04:57.679 11:19:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:57.679 OK 00:04:57.679 11:19:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.679 00:04:57.679 real 0m0.183s 00:04:57.679 user 0m0.098s 00:04:57.679 sys 0m0.095s 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.679 11:19:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.679 ************************************ 00:04:57.679 END TEST rpc_client 00:04:57.679 ************************************ 00:04:57.679 11:19:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:57.679 11:19:42 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.679 11:19:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.679 11:19:42 -- common/autotest_common.sh@10 -- # set +x 00:04:57.679 ************************************ 00:04:57.679 START TEST json_config 00:04:57.679 ************************************ 00:04:57.679 11:19:42 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:57.938 11:19:42 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.938 11:19:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.938 11:19:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.938 11:19:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.938 11:19:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.938 11:19:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.938 11:19:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:57.938 11:19:43 json_config -- scripts/common.sh@345 -- # : 1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.938 11:19:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.938 11:19:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@353 -- # local d=1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.938 11:19:43 json_config -- scripts/common.sh@355 -- # echo 1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.938 11:19:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@353 -- # local d=2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.938 11:19:43 json_config -- scripts/common.sh@355 -- # echo 2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.938 11:19:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.938 11:19:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.938 11:19:43 json_config -- scripts/common.sh@368 -- # return 0 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:57.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.938 --rc genhtml_branch_coverage=1 00:04:57.938 --rc genhtml_function_coverage=1 00:04:57.938 --rc genhtml_legend=1 00:04:57.938 --rc geninfo_all_blocks=1 00:04:57.938 --rc geninfo_unexecuted_blocks=1 00:04:57.938 00:04:57.938 ' 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:57.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.938 --rc genhtml_branch_coverage=1 00:04:57.938 --rc genhtml_function_coverage=1 00:04:57.938 --rc genhtml_legend=1 00:04:57.938 --rc geninfo_all_blocks=1 00:04:57.938 --rc geninfo_unexecuted_blocks=1 00:04:57.938 00:04:57.938 ' 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:57.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.938 --rc genhtml_branch_coverage=1 00:04:57.938 --rc genhtml_function_coverage=1 00:04:57.938 --rc genhtml_legend=1 00:04:57.938 --rc geninfo_all_blocks=1 00:04:57.938 --rc geninfo_unexecuted_blocks=1 00:04:57.938 00:04:57.938 ' 00:04:57.938 11:19:43 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:57.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.938 --rc genhtml_branch_coverage=1 00:04:57.938 --rc genhtml_function_coverage=1 00:04:57.938 --rc genhtml_legend=1 00:04:57.938 --rc geninfo_all_blocks=1 00:04:57.938 --rc geninfo_unexecuted_blocks=1 00:04:57.938 00:04:57.938 ' 00:04:57.938 11:19:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.938 11:19:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.938 11:19:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.938 11:19:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.939 11:19:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9d229ec-5263-47e1-b343-3b59e9c68251 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d9d229ec-5263-47e1-b343-3b59e9c68251 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.939 11:19:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.939 11:19:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.939 11:19:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.939 11:19:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.939 11:19:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.939 11:19:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.939 11:19:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.939 11:19:43 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.939 11:19:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@51 -- # : 0 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.939 11:19:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.939 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.939 11:19:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.939 WARNING: No tests are enabled so not running JSON configuration tests 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:57.939 11:19:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:57.939 00:04:57.939 real 0m0.144s 00:04:57.939 user 0m0.086s 00:04:57.939 sys 0m0.057s 00:04:57.939 11:19:43 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.939 11:19:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.939 ************************************ 00:04:57.939 END TEST json_config 00:04:57.939 ************************************ 00:04:57.939 11:19:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:57.939 11:19:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.939 11:19:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.939 11:19:43 -- common/autotest_common.sh@10 -- # set +x 00:04:57.939 ************************************ 00:04:57.939 START TEST json_config_extra_key 00:04:57.939 ************************************ 00:04:57.939 11:19:43 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:57.939 11:19:43 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:57.939 11:19:43 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:57.939 11:19:43 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:04:58.198 11:19:43 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.198 11:19:43 
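json_config.sh exited almost immediately a few entries above because none of its gating flags were enabled for this run. The check it performs is equivalent to the snippet below, so exporting any one of the listed SPDK_TEST_* variables as 1 before the run is what lets the JSON configuration tests actually execute (the zero defaults shown are illustrative):

    #!/usr/bin/env bash
    # Mirror of the early-exit gate from json_config.sh seen in the trace.
    : "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi
    echo 'at least one suite enabled; JSON configuration tests would run'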
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.198 11:19:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.198 11:19:43 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.198 11:19:43 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:58.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.199 --rc genhtml_branch_coverage=1 00:04:58.199 --rc genhtml_function_coverage=1 00:04:58.199 --rc genhtml_legend=1 00:04:58.199 --rc geninfo_all_blocks=1 00:04:58.199 --rc geninfo_unexecuted_blocks=1 00:04:58.199 00:04:58.199 ' 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:58.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.199 --rc genhtml_branch_coverage=1 00:04:58.199 --rc genhtml_function_coverage=1 00:04:58.199 --rc genhtml_legend=1 00:04:58.199 --rc geninfo_all_blocks=1 00:04:58.199 --rc geninfo_unexecuted_blocks=1 00:04:58.199 00:04:58.199 ' 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:58.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.199 --rc genhtml_branch_coverage=1 00:04:58.199 --rc genhtml_function_coverage=1 00:04:58.199 --rc genhtml_legend=1 00:04:58.199 --rc geninfo_all_blocks=1 00:04:58.199 --rc geninfo_unexecuted_blocks=1 00:04:58.199 00:04:58.199 ' 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:58.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.199 --rc genhtml_branch_coverage=1 00:04:58.199 --rc 
genhtml_function_coverage=1 00:04:58.199 --rc genhtml_legend=1 00:04:58.199 --rc geninfo_all_blocks=1 00:04:58.199 --rc geninfo_unexecuted_blocks=1 00:04:58.199 00:04:58.199 ' 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d9d229ec-5263-47e1-b343-3b59e9c68251 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d9d229ec-5263-47e1-b343-3b59e9c68251 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.199 11:19:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.199 11:19:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.199 11:19:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.199 11:19:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.199 11:19:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.199 11:19:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.199 11:19:43 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.199 11:19:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.199 11:19:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.199 11:19:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.199 INFO: launching applications... 
00:04:58.199 11:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57748 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.199 Waiting for target to run... 00:04:58.199 11:19:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57748 /var/tmp/spdk_tgt.sock 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57748 ']' 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.199 11:19:43 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.200 11:19:43 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.200 11:19:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.200 11:19:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.200 [2024-10-27 11:19:43.369223] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:04:58.200 [2024-10-27 11:19:43.369815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57748 ] 00:04:58.458 [2024-10-27 11:19:43.693386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.717 [2024-10-27 11:19:43.786603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.284 00:04:59.284 INFO: shutting down applications... 00:04:59.284 11:19:44 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.284 11:19:44 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.284 11:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:59.284 11:19:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57748 ]] 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57748 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:04:59.284 11:19:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.542 11:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.542 11:19:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.542 11:19:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:04:59.542 11:19:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.108 11:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.108 11:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.108 11:19:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:05:00.108 11:19:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.673 11:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.673 11:19:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.673 11:19:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:05:00.673 11:19:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:01.242 SPDK target shutdown done 00:05:01.242 Success 00:05:01.242 11:19:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:01.242 11:19:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:01.242 00:05:01.242 real 0m3.179s 00:05:01.242 user 0m2.718s 00:05:01.242 sys 0m0.414s 00:05:01.242 11:19:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.242 ************************************ 00:05:01.242 END TEST json_config_extra_key 00:05:01.242 ************************************ 00:05:01.242 11:19:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.242 11:19:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.242 11:19:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.242 11:19:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.242 11:19:46 -- common/autotest_common.sh@10 -- # set +x 00:05:01.242 
************************************ 00:05:01.242 START TEST alias_rpc 00:05:01.242 ************************************ 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:01.242 * Looking for test storage... 00:05:01.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.242 11:19:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.242 --rc genhtml_branch_coverage=1 00:05:01.242 --rc genhtml_function_coverage=1 00:05:01.242 --rc genhtml_legend=1 00:05:01.242 --rc geninfo_all_blocks=1 00:05:01.242 --rc geninfo_unexecuted_blocks=1 00:05:01.242 00:05:01.242 ' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.242 --rc genhtml_branch_coverage=1 00:05:01.242 --rc genhtml_function_coverage=1 00:05:01.242 --rc genhtml_legend=1 00:05:01.242 --rc geninfo_all_blocks=1 00:05:01.242 --rc geninfo_unexecuted_blocks=1 00:05:01.242 00:05:01.242 ' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.242 --rc genhtml_branch_coverage=1 00:05:01.242 --rc genhtml_function_coverage=1 00:05:01.242 --rc genhtml_legend=1 00:05:01.242 --rc geninfo_all_blocks=1 00:05:01.242 --rc geninfo_unexecuted_blocks=1 00:05:01.242 00:05:01.242 ' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:01.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.242 --rc genhtml_branch_coverage=1 00:05:01.242 --rc genhtml_function_coverage=1 00:05:01.242 --rc genhtml_legend=1 00:05:01.242 --rc geninfo_all_blocks=1 00:05:01.242 --rc geninfo_unexecuted_blocks=1 00:05:01.242 00:05:01.242 ' 00:05:01.242 11:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:01.242 11:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57841 00:05:01.242 11:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57841 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57841 ']' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.242 11:19:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.242 11:19:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.501 [2024-10-27 11:19:46.605286] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:01.501 [2024-10-27 11:19:46.606795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57841 ] 00:05:01.501 [2024-10-27 11:19:46.772744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.760 [2024-10-27 11:19:46.848237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.327 11:19:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.327 11:19:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:02.327 11:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:02.586 11:19:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57841 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57841 ']' 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57841 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57841 00:05:02.586 killing process with pid 57841 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57841' 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 57841 00:05:02.586 11:19:47 alias_rpc -- common/autotest_common.sh@974 -- # wait 57841 00:05:04.017 ************************************ 00:05:04.017 END TEST alias_rpc 00:05:04.017 ************************************ 00:05:04.017 00:05:04.017 real 0m2.462s 00:05:04.017 user 0m2.547s 00:05:04.017 sys 0m0.409s 00:05:04.017 11:19:48 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.017 11:19:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 11:19:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:04.017 11:19:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.017 11:19:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.017 11:19:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.017 11:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 ************************************ 00:05:04.017 START TEST spdkcli_tcp 00:05:04.017 ************************************ 00:05:04.017 11:19:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.017 * Looking for test storage... 
00:05:04.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.017 11:19:48 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:04.017 11:19:48 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:04.017 11:19:48 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.017 11:19:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.017 --rc genhtml_branch_coverage=1 00:05:04.017 --rc genhtml_function_coverage=1 00:05:04.017 --rc genhtml_legend=1 00:05:04.017 --rc geninfo_all_blocks=1 00:05:04.017 --rc geninfo_unexecuted_blocks=1 00:05:04.017 00:05:04.017 ' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.017 --rc genhtml_branch_coverage=1 00:05:04.017 --rc genhtml_function_coverage=1 00:05:04.017 --rc genhtml_legend=1 00:05:04.017 --rc geninfo_all_blocks=1 00:05:04.017 --rc geninfo_unexecuted_blocks=1 00:05:04.017 
00:05:04.017 ' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.017 --rc genhtml_branch_coverage=1 00:05:04.017 --rc genhtml_function_coverage=1 00:05:04.017 --rc genhtml_legend=1 00:05:04.017 --rc geninfo_all_blocks=1 00:05:04.017 --rc geninfo_unexecuted_blocks=1 00:05:04.017 00:05:04.017 ' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:04.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.017 --rc genhtml_branch_coverage=1 00:05:04.017 --rc genhtml_function_coverage=1 00:05:04.017 --rc genhtml_legend=1 00:05:04.017 --rc geninfo_all_blocks=1 00:05:04.017 --rc geninfo_unexecuted_blocks=1 00:05:04.017 00:05:04.017 ' 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57932 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57932 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57932 ']' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.017 11:19:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.017 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.017 [2024-10-27 11:19:49.105258] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:04.017 [2024-10-27 11:19:49.105404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57932 ] 00:05:04.017 [2024-10-27 11:19:49.261925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.275 [2024-10-27 11:19:49.339692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.275 [2024-10-27 11:19:49.339798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.841 11:19:49 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.841 11:19:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:04.841 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:04.841 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57949 00:05:04.841 11:19:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:04.841 [ 00:05:04.841 "bdev_malloc_delete", 00:05:04.841 "bdev_malloc_create", 00:05:04.841 "bdev_null_resize", 00:05:04.841 "bdev_null_delete", 00:05:04.841 "bdev_null_create", 00:05:04.841 "bdev_nvme_cuse_unregister", 00:05:04.841 "bdev_nvme_cuse_register", 00:05:04.841 "bdev_opal_new_user", 00:05:04.841 "bdev_opal_set_lock_state", 00:05:04.841 "bdev_opal_delete", 00:05:04.841 "bdev_opal_get_info", 00:05:04.841 "bdev_opal_create", 00:05:04.841 "bdev_nvme_opal_revert", 00:05:04.841 "bdev_nvme_opal_init", 00:05:04.841 "bdev_nvme_send_cmd", 00:05:04.841 "bdev_nvme_set_keys", 00:05:04.841 "bdev_nvme_get_path_iostat", 00:05:04.841 "bdev_nvme_get_mdns_discovery_info", 00:05:04.841 "bdev_nvme_stop_mdns_discovery", 00:05:04.841 "bdev_nvme_start_mdns_discovery", 00:05:04.841 "bdev_nvme_set_multipath_policy", 00:05:04.841 "bdev_nvme_set_preferred_path", 00:05:04.841 "bdev_nvme_get_io_paths", 00:05:04.841 "bdev_nvme_remove_error_injection", 00:05:04.841 "bdev_nvme_add_error_injection", 00:05:04.841 "bdev_nvme_get_discovery_info", 00:05:04.841 "bdev_nvme_stop_discovery", 00:05:04.841 "bdev_nvme_start_discovery", 00:05:04.841 "bdev_nvme_get_controller_health_info", 00:05:04.841 "bdev_nvme_disable_controller", 00:05:04.841 "bdev_nvme_enable_controller", 00:05:04.841 "bdev_nvme_reset_controller", 00:05:04.841 "bdev_nvme_get_transport_statistics", 00:05:04.841 "bdev_nvme_apply_firmware", 00:05:04.841 "bdev_nvme_detach_controller", 00:05:04.841 "bdev_nvme_get_controllers", 00:05:04.841 "bdev_nvme_attach_controller", 00:05:04.841 "bdev_nvme_set_hotplug", 00:05:04.841 "bdev_nvme_set_options", 00:05:04.841 "bdev_passthru_delete", 00:05:04.841 "bdev_passthru_create", 00:05:04.841 "bdev_lvol_set_parent_bdev", 00:05:04.841 "bdev_lvol_set_parent", 00:05:04.841 "bdev_lvol_check_shallow_copy", 00:05:04.841 "bdev_lvol_start_shallow_copy", 00:05:04.841 "bdev_lvol_grow_lvstore", 00:05:04.841 "bdev_lvol_get_lvols", 00:05:04.841 "bdev_lvol_get_lvstores", 00:05:04.841 "bdev_lvol_delete", 00:05:04.841 "bdev_lvol_set_read_only", 00:05:04.841 "bdev_lvol_resize", 00:05:04.842 "bdev_lvol_decouple_parent", 00:05:04.842 "bdev_lvol_inflate", 00:05:04.842 "bdev_lvol_rename", 00:05:04.842 "bdev_lvol_clone_bdev", 00:05:04.842 "bdev_lvol_clone", 00:05:04.842 "bdev_lvol_snapshot", 00:05:04.842 "bdev_lvol_create", 00:05:04.842 "bdev_lvol_delete_lvstore", 00:05:04.842 "bdev_lvol_rename_lvstore", 00:05:04.842 
"bdev_lvol_create_lvstore", 00:05:04.842 "bdev_raid_set_options", 00:05:04.842 "bdev_raid_remove_base_bdev", 00:05:04.842 "bdev_raid_add_base_bdev", 00:05:04.842 "bdev_raid_delete", 00:05:04.842 "bdev_raid_create", 00:05:04.842 "bdev_raid_get_bdevs", 00:05:04.842 "bdev_error_inject_error", 00:05:04.842 "bdev_error_delete", 00:05:04.842 "bdev_error_create", 00:05:04.842 "bdev_split_delete", 00:05:04.842 "bdev_split_create", 00:05:04.842 "bdev_delay_delete", 00:05:04.842 "bdev_delay_create", 00:05:04.842 "bdev_delay_update_latency", 00:05:04.842 "bdev_zone_block_delete", 00:05:04.842 "bdev_zone_block_create", 00:05:04.842 "blobfs_create", 00:05:04.842 "blobfs_detect", 00:05:04.842 "blobfs_set_cache_size", 00:05:04.842 "bdev_xnvme_delete", 00:05:04.842 "bdev_xnvme_create", 00:05:04.842 "bdev_aio_delete", 00:05:04.842 "bdev_aio_rescan", 00:05:04.842 "bdev_aio_create", 00:05:04.842 "bdev_ftl_set_property", 00:05:04.842 "bdev_ftl_get_properties", 00:05:04.842 "bdev_ftl_get_stats", 00:05:04.842 "bdev_ftl_unmap", 00:05:04.842 "bdev_ftl_unload", 00:05:04.842 "bdev_ftl_delete", 00:05:04.842 "bdev_ftl_load", 00:05:04.842 "bdev_ftl_create", 00:05:04.842 "bdev_virtio_attach_controller", 00:05:04.842 "bdev_virtio_scsi_get_devices", 00:05:04.842 "bdev_virtio_detach_controller", 00:05:04.842 "bdev_virtio_blk_set_hotplug", 00:05:04.842 "bdev_iscsi_delete", 00:05:04.842 "bdev_iscsi_create", 00:05:04.842 "bdev_iscsi_set_options", 00:05:04.842 "accel_error_inject_error", 00:05:04.842 "ioat_scan_accel_module", 00:05:04.842 "dsa_scan_accel_module", 00:05:04.842 "iaa_scan_accel_module", 00:05:04.842 "keyring_file_remove_key", 00:05:04.842 "keyring_file_add_key", 00:05:04.842 "keyring_linux_set_options", 00:05:04.842 "fsdev_aio_delete", 00:05:04.842 "fsdev_aio_create", 00:05:04.842 "iscsi_get_histogram", 00:05:04.842 "iscsi_enable_histogram", 00:05:04.842 "iscsi_set_options", 00:05:04.842 "iscsi_get_auth_groups", 00:05:04.842 "iscsi_auth_group_remove_secret", 00:05:04.842 "iscsi_auth_group_add_secret", 00:05:04.842 "iscsi_delete_auth_group", 00:05:04.842 "iscsi_create_auth_group", 00:05:04.842 "iscsi_set_discovery_auth", 00:05:04.842 "iscsi_get_options", 00:05:04.842 "iscsi_target_node_request_logout", 00:05:04.842 "iscsi_target_node_set_redirect", 00:05:04.842 "iscsi_target_node_set_auth", 00:05:04.842 "iscsi_target_node_add_lun", 00:05:04.842 "iscsi_get_stats", 00:05:04.842 "iscsi_get_connections", 00:05:04.842 "iscsi_portal_group_set_auth", 00:05:04.842 "iscsi_start_portal_group", 00:05:04.842 "iscsi_delete_portal_group", 00:05:04.842 "iscsi_create_portal_group", 00:05:04.842 "iscsi_get_portal_groups", 00:05:04.842 "iscsi_delete_target_node", 00:05:04.842 "iscsi_target_node_remove_pg_ig_maps", 00:05:04.842 "iscsi_target_node_add_pg_ig_maps", 00:05:04.842 "iscsi_create_target_node", 00:05:04.842 "iscsi_get_target_nodes", 00:05:04.842 "iscsi_delete_initiator_group", 00:05:04.842 "iscsi_initiator_group_remove_initiators", 00:05:04.842 "iscsi_initiator_group_add_initiators", 00:05:04.842 "iscsi_create_initiator_group", 00:05:04.842 "iscsi_get_initiator_groups", 00:05:04.842 "nvmf_set_crdt", 00:05:04.842 "nvmf_set_config", 00:05:04.842 "nvmf_set_max_subsystems", 00:05:04.842 "nvmf_stop_mdns_prr", 00:05:04.842 "nvmf_publish_mdns_prr", 00:05:04.842 "nvmf_subsystem_get_listeners", 00:05:04.842 "nvmf_subsystem_get_qpairs", 00:05:04.842 "nvmf_subsystem_get_controllers", 00:05:04.842 "nvmf_get_stats", 00:05:04.842 "nvmf_get_transports", 00:05:04.842 "nvmf_create_transport", 00:05:04.842 "nvmf_get_targets", 00:05:04.842 
"nvmf_delete_target", 00:05:04.842 "nvmf_create_target", 00:05:04.842 "nvmf_subsystem_allow_any_host", 00:05:04.842 "nvmf_subsystem_set_keys", 00:05:04.842 "nvmf_subsystem_remove_host", 00:05:04.842 "nvmf_subsystem_add_host", 00:05:04.842 "nvmf_ns_remove_host", 00:05:04.842 "nvmf_ns_add_host", 00:05:04.842 "nvmf_subsystem_remove_ns", 00:05:04.842 "nvmf_subsystem_set_ns_ana_group", 00:05:04.842 "nvmf_subsystem_add_ns", 00:05:04.842 "nvmf_subsystem_listener_set_ana_state", 00:05:04.842 "nvmf_discovery_get_referrals", 00:05:04.842 "nvmf_discovery_remove_referral", 00:05:04.842 "nvmf_discovery_add_referral", 00:05:04.842 "nvmf_subsystem_remove_listener", 00:05:04.842 "nvmf_subsystem_add_listener", 00:05:04.842 "nvmf_delete_subsystem", 00:05:04.842 "nvmf_create_subsystem", 00:05:04.842 "nvmf_get_subsystems", 00:05:04.842 "env_dpdk_get_mem_stats", 00:05:04.842 "nbd_get_disks", 00:05:04.842 "nbd_stop_disk", 00:05:04.842 "nbd_start_disk", 00:05:04.842 "ublk_recover_disk", 00:05:04.842 "ublk_get_disks", 00:05:04.842 "ublk_stop_disk", 00:05:04.842 "ublk_start_disk", 00:05:04.842 "ublk_destroy_target", 00:05:04.842 "ublk_create_target", 00:05:04.842 "virtio_blk_create_transport", 00:05:04.842 "virtio_blk_get_transports", 00:05:04.842 "vhost_controller_set_coalescing", 00:05:04.842 "vhost_get_controllers", 00:05:04.842 "vhost_delete_controller", 00:05:04.842 "vhost_create_blk_controller", 00:05:04.842 "vhost_scsi_controller_remove_target", 00:05:04.842 "vhost_scsi_controller_add_target", 00:05:04.842 "vhost_start_scsi_controller", 00:05:04.842 "vhost_create_scsi_controller", 00:05:04.842 "thread_set_cpumask", 00:05:04.842 "scheduler_set_options", 00:05:04.842 "framework_get_governor", 00:05:04.842 "framework_get_scheduler", 00:05:04.842 "framework_set_scheduler", 00:05:04.842 "framework_get_reactors", 00:05:04.842 "thread_get_io_channels", 00:05:04.842 "thread_get_pollers", 00:05:04.842 "thread_get_stats", 00:05:04.842 "framework_monitor_context_switch", 00:05:04.842 "spdk_kill_instance", 00:05:04.842 "log_enable_timestamps", 00:05:04.842 "log_get_flags", 00:05:04.842 "log_clear_flag", 00:05:04.842 "log_set_flag", 00:05:04.842 "log_get_level", 00:05:04.842 "log_set_level", 00:05:04.842 "log_get_print_level", 00:05:04.842 "log_set_print_level", 00:05:04.842 "framework_enable_cpumask_locks", 00:05:04.842 "framework_disable_cpumask_locks", 00:05:04.842 "framework_wait_init", 00:05:04.842 "framework_start_init", 00:05:04.842 "scsi_get_devices", 00:05:04.842 "bdev_get_histogram", 00:05:04.842 "bdev_enable_histogram", 00:05:04.842 "bdev_set_qos_limit", 00:05:04.842 "bdev_set_qd_sampling_period", 00:05:04.842 "bdev_get_bdevs", 00:05:04.842 "bdev_reset_iostat", 00:05:04.842 "bdev_get_iostat", 00:05:04.842 "bdev_examine", 00:05:04.842 "bdev_wait_for_examine", 00:05:04.842 "bdev_set_options", 00:05:04.842 "accel_get_stats", 00:05:04.842 "accel_set_options", 00:05:04.842 "accel_set_driver", 00:05:04.842 "accel_crypto_key_destroy", 00:05:04.842 "accel_crypto_keys_get", 00:05:04.842 "accel_crypto_key_create", 00:05:04.842 "accel_assign_opc", 00:05:04.842 "accel_get_module_info", 00:05:04.842 "accel_get_opc_assignments", 00:05:04.842 "vmd_rescan", 00:05:04.842 "vmd_remove_device", 00:05:04.842 "vmd_enable", 00:05:04.842 "sock_get_default_impl", 00:05:04.843 "sock_set_default_impl", 00:05:04.843 "sock_impl_set_options", 00:05:04.843 "sock_impl_get_options", 00:05:04.843 "iobuf_get_stats", 00:05:04.843 "iobuf_set_options", 00:05:04.843 "keyring_get_keys", 00:05:04.843 "framework_get_pci_devices", 00:05:04.843 
"framework_get_config", 00:05:04.843 "framework_get_subsystems", 00:05:04.843 "fsdev_set_opts", 00:05:04.843 "fsdev_get_opts", 00:05:04.843 "trace_get_info", 00:05:04.843 "trace_get_tpoint_group_mask", 00:05:04.843 "trace_disable_tpoint_group", 00:05:04.843 "trace_enable_tpoint_group", 00:05:04.843 "trace_clear_tpoint_mask", 00:05:04.843 "trace_set_tpoint_mask", 00:05:04.843 "notify_get_notifications", 00:05:04.843 "notify_get_types", 00:05:04.843 "spdk_get_version", 00:05:04.843 "rpc_get_methods" 00:05:04.843 ] 00:05:05.101 11:19:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.101 11:19:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.101 11:19:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57932 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57932 ']' 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57932 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57932 00:05:05.101 killing process with pid 57932 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57932' 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57932 00:05:05.101 11:19:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57932 00:05:06.477 ************************************ 00:05:06.477 END TEST spdkcli_tcp 00:05:06.477 ************************************ 00:05:06.477 00:05:06.477 real 0m2.451s 00:05:06.477 user 0m4.399s 00:05:06.477 sys 0m0.420s 00:05:06.477 11:19:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.477 11:19:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.477 11:19:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.477 11:19:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.477 11:19:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.477 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:06.477 ************************************ 00:05:06.477 START TEST dpdk_mem_utility 00:05:06.477 ************************************ 00:05:06.477 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.477 * Looking for test storage... 
00:05:06.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:06.477 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:06.477 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:06.477 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:05:06.477 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:06.477 11:19:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.477 11:19:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.477 11:19:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.477 11:19:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.477 11:19:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.478 11:19:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.478 --rc genhtml_branch_coverage=1 00:05:06.478 --rc genhtml_function_coverage=1 00:05:06.478 --rc genhtml_legend=1 00:05:06.478 --rc geninfo_all_blocks=1 00:05:06.478 --rc geninfo_unexecuted_blocks=1 00:05:06.478 00:05:06.478 ' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.478 --rc 
genhtml_branch_coverage=1 00:05:06.478 --rc genhtml_function_coverage=1 00:05:06.478 --rc genhtml_legend=1 00:05:06.478 --rc geninfo_all_blocks=1 00:05:06.478 --rc geninfo_unexecuted_blocks=1 00:05:06.478 00:05:06.478 ' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.478 --rc genhtml_branch_coverage=1 00:05:06.478 --rc genhtml_function_coverage=1 00:05:06.478 --rc genhtml_legend=1 00:05:06.478 --rc geninfo_all_blocks=1 00:05:06.478 --rc geninfo_unexecuted_blocks=1 00:05:06.478 00:05:06.478 ' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.478 --rc genhtml_branch_coverage=1 00:05:06.478 --rc genhtml_function_coverage=1 00:05:06.478 --rc genhtml_legend=1 00:05:06.478 --rc geninfo_all_blocks=1 00:05:06.478 --rc geninfo_unexecuted_blocks=1 00:05:06.478 00:05:06.478 ' 00:05:06.478 11:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.478 11:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58037 00:05:06.478 11:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.478 11:19:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58037 00:05:06.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58037 ']' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.478 11:19:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.478 [2024-10-27 11:19:51.607542] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:06.478 [2024-10-27 11:19:51.607851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58037 ] 00:05:06.737 [2024-10-27 11:19:51.762288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.737 [2024-10-27 11:19:51.840536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.305 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.305 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:07.305 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.305 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.305 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.305 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.305 { 00:05:07.305 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.305 } 00:05:07.305 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.305 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.305 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:07.305 1 heaps totaling size 816.000000 MiB 00:05:07.305 size: 816.000000 MiB heap id: 0 00:05:07.305 end heaps---------- 00:05:07.305 9 mempools totaling size 595.772034 MiB 00:05:07.305 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.305 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.305 size: 92.545471 MiB name: bdev_io_58037 00:05:07.305 size: 50.003479 MiB name: msgpool_58037 00:05:07.305 size: 36.509338 MiB name: fsdev_io_58037 00:05:07.305 size: 21.763794 MiB name: PDU_Pool 00:05:07.305 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.305 size: 4.133484 MiB name: evtpool_58037 00:05:07.305 size: 0.026123 MiB name: Session_Pool 00:05:07.305 end mempools------- 00:05:07.305 6 memzones totaling size 4.142822 MiB 00:05:07.306 size: 1.000366 MiB name: RG_ring_0_58037 00:05:07.306 size: 1.000366 MiB name: RG_ring_1_58037 00:05:07.306 size: 1.000366 MiB name: RG_ring_4_58037 00:05:07.306 size: 1.000366 MiB name: RG_ring_5_58037 00:05:07.306 size: 0.125366 MiB name: RG_ring_2_58037 00:05:07.306 size: 0.015991 MiB name: RG_ring_3_58037 00:05:07.306 end memzones------- 00:05:07.306 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.306 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:05:07.306 list of free elements. 
size: 16.790405 MiB 00:05:07.306 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:07.306 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:07.306 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:07.306 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:07.306 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:07.306 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:07.306 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:07.306 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:07.306 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:07.306 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:07.306 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:07.306 element at address: 0x20001ac00000 with size: 0.558777 MiB 00:05:07.306 element at address: 0x200000c00000 with size: 0.491638 MiB 00:05:07.306 element at address: 0x200018e00000 with size: 0.488464 MiB 00:05:07.306 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:07.306 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:07.306 element at address: 0x200028000000 with size: 0.390686 MiB 00:05:07.306 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:07.306 list of standard malloc elements. size: 199.288696 MiB 00:05:07.306 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:07.306 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:07.306 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:07.306 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:07.306 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:07.306 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:07.306 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:07.306 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:07.306 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:07.306 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:07.306 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:07.306 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:07.306 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:07.306 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:07.306 element at 
address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d6c0 
with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:07.306 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:07.307 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:07.307 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac918c0 with size: 0.000244 MiB 
00:05:07.307 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:07.307 element at 
address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:07.307 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:07.307 element at address: 0x200028064140 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806ae00 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d480 
with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:07.307 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:07.308 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:07.308 list of memzone associated elements. 
size: 599.920898 MiB 00:05:07.308 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:07.308 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.308 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:07.308 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.308 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:07.308 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58037_0 00:05:07.308 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:07.308 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58037_0 00:05:07.308 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:07.308 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58037_0 00:05:07.308 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:07.308 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.308 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:07.308 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.308 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:07.308 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58037_0 00:05:07.308 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:07.308 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58037 00:05:07.308 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:07.308 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58037 00:05:07.308 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:07.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.308 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:07.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.308 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:07.308 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.308 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:07.308 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.308 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:07.308 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58037 00:05:07.308 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:07.308 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58037 00:05:07.308 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:07.308 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58037 00:05:07.308 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:07.308 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58037 00:05:07.308 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:07.308 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58037 00:05:07.308 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:07.308 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58037 00:05:07.308 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:07.308 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.308 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:07.308 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.308 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:07.308 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.308 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:07.308 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58037 00:05:07.308 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:07.308 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58037 00:05:07.308 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:07.308 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.308 element at address: 0x200028064240 with size: 0.023804 MiB 00:05:07.308 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.308 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:07.308 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58037 00:05:07.308 element at address: 0x20002806a3c0 with size: 0.002502 MiB 00:05:07.308 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.308 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:07.308 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58037 00:05:07.308 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:07.308 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58037 00:05:07.308 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:07.308 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58037 00:05:07.308 element at address: 0x20002806af00 with size: 0.000366 MiB 00:05:07.308 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.308 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.308 11:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58037 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58037 ']' 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58037 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58037 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58037' 00:05:07.308 killing process with pid 58037 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58037 00:05:07.308 11:19:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58037 00:05:08.686 00:05:08.686 real 0m2.329s 00:05:08.686 user 0m2.353s 00:05:08.686 sys 0m0.377s 00:05:08.686 11:19:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.686 11:19:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.686 ************************************ 00:05:08.686 END TEST dpdk_mem_utility 00:05:08.686 ************************************ 00:05:08.686 11:19:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.686 11:19:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.686 11:19:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.686 11:19:53 -- common/autotest_common.sh@10 -- # set +x 
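The memzone summary printed by the dpdk_mem_utility test above is easier to read once reduced to name/size pairs. A minimal sketch, assuming the console output has been captured to a local file (build.log is a hypothetical name), that pulls out the associated memzone sizes and sorts them largest first:

  grep -o 'associated memzone info: size: [0-9.]* MiB name: [^ ]*' build.log \
    | awk '{printf "%12s MiB  %s\n", $5, $8}' \
    | sort -rn

For the dump above this puts MP_PDU_immediate_data_Pool_0 (211.4 MiB) and MP_PDU_data_out_Pool_0 (157.6 MiB) at the top, which together account for most of the 599.920898 MiB reported at the head of the memzone list.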
00:05:08.686 ************************************ 00:05:08.686 START TEST event 00:05:08.686 ************************************ 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:08.686 * Looking for test storage... 00:05:08.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1689 -- # lcov --version 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:08.686 11:19:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.686 11:19:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.686 11:19:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.686 11:19:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.686 11:19:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.686 11:19:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.686 11:19:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.686 11:19:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.686 11:19:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.686 11:19:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.686 11:19:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.686 11:19:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:08.686 11:19:53 event -- scripts/common.sh@345 -- # : 1 00:05:08.686 11:19:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.686 11:19:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.686 11:19:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:08.686 11:19:53 event -- scripts/common.sh@353 -- # local d=1 00:05:08.686 11:19:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.686 11:19:53 event -- scripts/common.sh@355 -- # echo 1 00:05:08.686 11:19:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.686 11:19:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:08.686 11:19:53 event -- scripts/common.sh@353 -- # local d=2 00:05:08.686 11:19:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.686 11:19:53 event -- scripts/common.sh@355 -- # echo 2 00:05:08.686 11:19:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.686 11:19:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.686 11:19:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.686 11:19:53 event -- scripts/common.sh@368 -- # return 0 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.686 --rc genhtml_branch_coverage=1 00:05:08.686 --rc genhtml_function_coverage=1 00:05:08.686 --rc genhtml_legend=1 00:05:08.686 --rc geninfo_all_blocks=1 00:05:08.686 --rc geninfo_unexecuted_blocks=1 00:05:08.686 00:05:08.686 ' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.686 --rc genhtml_branch_coverage=1 00:05:08.686 --rc genhtml_function_coverage=1 00:05:08.686 --rc genhtml_legend=1 00:05:08.686 --rc 
geninfo_all_blocks=1 00:05:08.686 --rc geninfo_unexecuted_blocks=1 00:05:08.686 00:05:08.686 ' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.686 --rc genhtml_branch_coverage=1 00:05:08.686 --rc genhtml_function_coverage=1 00:05:08.686 --rc genhtml_legend=1 00:05:08.686 --rc geninfo_all_blocks=1 00:05:08.686 --rc geninfo_unexecuted_blocks=1 00:05:08.686 00:05:08.686 ' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.686 --rc genhtml_branch_coverage=1 00:05:08.686 --rc genhtml_function_coverage=1 00:05:08.686 --rc genhtml_legend=1 00:05:08.686 --rc geninfo_all_blocks=1 00:05:08.686 --rc geninfo_unexecuted_blocks=1 00:05:08.686 00:05:08.686 ' 00:05:08.686 11:19:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:08.686 11:19:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:08.686 11:19:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:08.686 11:19:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.686 11:19:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.686 ************************************ 00:05:08.686 START TEST event_perf 00:05:08.686 ************************************ 00:05:08.686 11:19:53 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:08.686 Running I/O for 1 seconds...[2024-10-27 11:19:53.938764] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:08.686 [2024-10-27 11:19:53.938924] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58129 ] 00:05:08.945 [2024-10-27 11:19:54.094462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.945 [2024-10-27 11:19:54.173641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.945 [2024-10-27 11:19:54.173933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.945 [2024-10-27 11:19:54.174100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.945 [2024-10-27 11:19:54.174124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.322 Running I/O for 1 seconds... 00:05:10.322 lcore 0: 202318 00:05:10.322 lcore 1: 202321 00:05:10.322 lcore 2: 202323 00:05:10.322 lcore 3: 202317 00:05:10.322 done. 
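The per-lcore counters above follow from the -m 0xF coremask handed to event_perf: 0xF sets the low four bits, so one reactor runs on each of cores 0 through 3, matching the four "Reactor started on core N" notices. A small stand-alone bash sketch that decodes such a mask into a core list (no SPDK involved; core numbering assumed to start at 0):

  #!/usr/bin/env bash
  # Print the core IDs selected by a hex coremask such as the 0xF used above.
  mask=${1:-0xF}
  for (( core = 0; core < 64; core++ )); do
    if (( (mask >> core) & 1 )); then
      echo "core $core"
    fi
  done

Invoked with 0xF it prints cores 0 through 3; a mask of 0x3 would restrict the run to cores 0 and 1.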
00:05:10.322 00:05:10.322 real 0m1.401s 00:05:10.322 user 0m4.194s 00:05:10.322 sys 0m0.089s 00:05:10.322 11:19:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.322 ************************************ 00:05:10.322 END TEST event_perf 00:05:10.322 ************************************ 00:05:10.322 11:19:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.322 11:19:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.322 11:19:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:10.322 11:19:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.322 11:19:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.322 ************************************ 00:05:10.322 START TEST event_reactor 00:05:10.322 ************************************ 00:05:10.322 11:19:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.322 [2024-10-27 11:19:55.395167] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:10.322 [2024-10-27 11:19:55.395250] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58163 ] 00:05:10.322 [2024-10-27 11:19:55.542777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.580 [2024-10-27 11:19:55.618160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.514 test_start 00:05:11.514 oneshot 00:05:11.514 tick 100 00:05:11.514 tick 100 00:05:11.514 tick 250 00:05:11.514 tick 100 00:05:11.514 tick 100 00:05:11.514 tick 250 00:05:11.514 tick 100 00:05:11.514 tick 500 00:05:11.514 tick 100 00:05:11.514 tick 100 00:05:11.514 tick 250 00:05:11.514 tick 100 00:05:11.514 tick 100 00:05:11.514 test_end 00:05:11.514 ************************************ 00:05:11.514 END TEST event_reactor 00:05:11.514 ************************************ 00:05:11.514 00:05:11.514 real 0m1.368s 00:05:11.514 user 0m1.206s 00:05:11.514 sys 0m0.055s 00:05:11.514 11:19:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.514 11:19:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.514 11:19:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.514 11:19:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:11.514 11:19:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.514 11:19:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.773 ************************************ 00:05:11.774 START TEST event_reactor_perf 00:05:11.774 ************************************ 00:05:11.774 11:19:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.774 [2024-10-27 11:19:56.823605] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
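The event_reactor output above mixes a oneshot timer with periodic timers of 100, 250, and 500; the shorter the period, the more tick lines it contributes during the 1-second run. A quick sanity check on those relative frequencies, again assuming the console output has been saved to a hypothetical build.log:

  # Count how often each tick period fired in the reactor test output.
  grep -o 'tick [0-9]*' build.log | sort | uniq -c | sort -rn

For the run above this yields 8 ticks of 100, 3 of 250, and 1 of 500, roughly the inverse-period ratio one would expect.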
00:05:11.774 [2024-10-27 11:19:56.823709] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58199 ] 00:05:11.774 [2024-10-27 11:19:56.981863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.031 [2024-10-27 11:19:57.074539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.967 test_start 00:05:12.967 test_end 00:05:12.967 Performance: 355017 events per second 00:05:12.967 00:05:12.967 real 0m1.400s 00:05:12.967 user 0m1.230s 00:05:12.967 sys 0m0.063s 00:05:12.967 11:19:58 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.967 ************************************ 00:05:12.967 END TEST event_reactor_perf 00:05:12.967 ************************************ 00:05:12.967 11:19:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.967 11:19:58 event -- event/event.sh@49 -- # uname -s 00:05:12.967 11:19:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.226 11:19:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.226 11:19:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.226 11:19:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.226 11:19:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.226 ************************************ 00:05:13.226 START TEST event_scheduler 00:05:13.226 ************************************ 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.226 * Looking for test storage... 
00:05:13.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.226 11:19:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.226 --rc genhtml_branch_coverage=1 00:05:13.226 --rc genhtml_function_coverage=1 00:05:13.226 --rc genhtml_legend=1 00:05:13.226 --rc geninfo_all_blocks=1 00:05:13.226 --rc geninfo_unexecuted_blocks=1 00:05:13.226 00:05:13.226 ' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.226 --rc genhtml_branch_coverage=1 00:05:13.226 --rc genhtml_function_coverage=1 00:05:13.226 --rc genhtml_legend=1 00:05:13.226 --rc geninfo_all_blocks=1 00:05:13.226 --rc geninfo_unexecuted_blocks=1 00:05:13.226 00:05:13.226 ' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.226 --rc genhtml_branch_coverage=1 00:05:13.226 --rc genhtml_function_coverage=1 00:05:13.226 --rc genhtml_legend=1 00:05:13.226 --rc geninfo_all_blocks=1 00:05:13.226 --rc geninfo_unexecuted_blocks=1 00:05:13.226 00:05:13.226 ' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.226 --rc genhtml_branch_coverage=1 00:05:13.226 --rc genhtml_function_coverage=1 00:05:13.226 --rc genhtml_legend=1 00:05:13.226 --rc geninfo_all_blocks=1 00:05:13.226 --rc geninfo_unexecuted_blocks=1 00:05:13.226 00:05:13.226 ' 00:05:13.226 11:19:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.226 11:19:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58270 00:05:13.226 11:19:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.226 11:19:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.226 11:19:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58270 00:05:13.226 11:19:58 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58270 ']' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.226 11:19:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.226 [2024-10-27 11:19:58.443990] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:13.226 [2024-10-27 11:19:58.444081] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:05:13.485 [2024-10-27 11:19:58.599177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.485 [2024-10-27 11:19:58.699029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.485 [2024-10-27 11:19:58.699360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.485 [2024-10-27 11:19:58.699602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.485 [2024-10-27 11:19:58.699841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:14.419 11:19:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.419 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.419 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.419 POWER: Cannot set governor of lcore 0 to performance 00:05:14.419 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.419 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.419 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:14.419 POWER: Cannot set governor of lcore 0 to userspace 00:05:14.419 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:14.419 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:14.419 POWER: Unable to set Power Management Environment for lcore 0 00:05:14.419 [2024-10-27 11:19:59.356974] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:14.419 [2024-10-27 11:19:59.356989] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:14.419 [2024-10-27 11:19:59.356996] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.419 [2024-10-27 11:19:59.357008] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.419 [2024-10-27 11:19:59.357014] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.419 [2024-10-27 11:19:59.357021] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.419 11:19:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 [2024-10-27 11:19:59.530794] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.419 11:19:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 ************************************ 00:05:14.419 START TEST scheduler_create_thread 00:05:14.419 ************************************ 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 2 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 3 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.419 4 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.419 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 5 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 6 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 7 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 8 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 9 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 10 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.420 11:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.985 11:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.985 00:05:14.985 real 0m0.591s 00:05:14.985 user 0m0.010s 00:05:14.985 sys 0m0.008s 00:05:14.985 11:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.985 11:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.985 ************************************ 00:05:14.985 END TEST scheduler_create_thread 00:05:14.985 ************************************ 00:05:14.985 11:20:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.985 11:20:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58270 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58270 ']' 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58270 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58270 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:14.985 killing process with pid 58270 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58270' 00:05:14.985 11:20:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58270 00:05:14.985 11:20:00 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58270 00:05:15.550 [2024-10-27 11:20:00.610669] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:16.116 ************************************ 00:05:16.116 END TEST event_scheduler 00:05:16.116 ************************************ 00:05:16.116 00:05:16.116 real 0m2.911s 00:05:16.116 user 0m5.813s 00:05:16.116 sys 0m0.329s 00:05:16.116 11:20:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.116 11:20:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.116 11:20:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.116 11:20:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.116 11:20:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.116 11:20:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.116 11:20:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.116 ************************************ 00:05:16.116 START TEST app_repeat 00:05:16.116 ************************************ 00:05:16.116 11:20:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:16.116 Process app_repeat pid: 58354 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58354 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58354' 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.116 spdk_app_start Round 0 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.116 11:20:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58354 /var/tmp/spdk-nbd.sock 00:05:16.117 11:20:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58354 ']' 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.117 11:20:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.117 [2024-10-27 11:20:01.275966] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
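The event_scheduler test that finished above drives a scheduler app started with --wait-for-rpc: the dynamic scheduler is selected over RPC before framework_start_init completes initialization. The POWER/governor errors reflect the cpufreq scaling_governor files not being usable inside this VM; the dynamic scheduler still loads and logs its defaults (load limit 20, core limit 80, core busy 95). A minimal sketch of the same RPC sequence against a locally started app, using the repo layout from this job (the sleep is a crude stand-in for the waitforlisten helper the test actually uses):

  #!/usr/bin/env bash
  # Start the scheduler test app paused at RPC time, pick the dynamic scheduler, then finish init.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  sched_pid=$!
  sleep 1                                  # crude wait for the RPC socket to appear
  "$SPDK/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK/scripts/rpc.py" framework_start_init
  kill "$sched_pid"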
00:05:16.117 [2024-10-27 11:20:01.276115] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58354 ] 00:05:16.375 [2024-10-27 11:20:01.453779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.375 [2024-10-27 11:20:01.551635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.375 [2024-10-27 11:20:01.551708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.941 11:20:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.941 11:20:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.941 11:20:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.200 Malloc0 00:05:17.200 11:20:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.457 Malloc1 00:05:17.458 11:20:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.458 11:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.716 /dev/nbd0 00:05:17.716 11:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.716 11:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:17.716 11:20:02 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.716 1+0 records in 00:05:17.716 1+0 records out 00:05:17.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163074 s, 25.1 MB/s 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.716 11:20:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.716 11:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.716 11:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.716 11:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.975 /dev/nbd1 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.975 1+0 records in 00:05:17.975 1+0 records out 00:05:17.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190814 s, 21.5 MB/s 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.975 11:20:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
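What the xtrace above is doing after each nbd_start_disk call is a readiness probe: waitfornbd polls /proc/partitions until the new device name shows up, then reads a single 4 KiB block with O_DIRECT and checks that a non-zero number of bytes came back. A stand-alone sketch of that pattern (the retry bound of 20 and the dd/stat sequence come from the trace; the helper name, its arguments and the back-off sleep are illustrative assumptions):

  # sketch: wait for an nbd device node to appear, then read one block through it
  wait_and_probe_nbd() {
    local nbd_name=$1                      # e.g. nbd0 (hypothetical helper, modelled on waitfornbd)
    local scratch=$2                       # scratch file for the single-block read
    local i
    for ((i = 1; i <= 20; i++)); do        # retry bound of 20 matches the trace
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                            # assumption: short back-off between polls
    done
    # read one 4 KiB block with O_DIRECT; a zero-length copy means the device is not serving I/O
    dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    [[ $(stat -c %s "$scratch") -ne 0 ]] || return 1
    rm -f "$scratch"
  }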
00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.975 { 00:05:17.975 "nbd_device": "/dev/nbd0", 00:05:17.975 "bdev_name": "Malloc0" 00:05:17.975 }, 00:05:17.975 { 00:05:17.975 "nbd_device": "/dev/nbd1", 00:05:17.975 "bdev_name": "Malloc1" 00:05:17.975 } 00:05:17.975 ]' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.975 { 00:05:17.975 "nbd_device": "/dev/nbd0", 00:05:17.975 "bdev_name": "Malloc0" 00:05:17.975 }, 00:05:17.975 { 00:05:17.975 "nbd_device": "/dev/nbd1", 00:05:17.975 "bdev_name": "Malloc1" 00:05:17.975 } 00:05:17.975 ]' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.975 /dev/nbd1' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.975 /dev/nbd1' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.975 256+0 records in 00:05:17.975 256+0 records out 00:05:17.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00807361 s, 130 MB/s 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.975 11:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.234 256+0 records in 00:05:18.234 256+0 records out 00:05:18.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020769 s, 50.5 MB/s 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.234 256+0 records in 00:05:18.234 256+0 records out 00:05:18.234 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018769 s, 55.9 MB/s 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.234 11:20:03 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.234 11:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.493 11:20:03 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.493 11:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.753 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.754 11:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.754 11:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.754 11:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.754 11:20:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.754 11:20:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.754 11:20:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.041 11:20:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.607 [2024-10-27 11:20:04.788312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.607 [2024-10-27 11:20:04.856931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.607 [2024-10-27 11:20:04.856931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.865 [2024-10-27 11:20:04.957960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.865 [2024-10-27 11:20:04.958008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.419 spdk_app_start Round 1 00:05:22.419 11:20:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.419 11:20:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.419 11:20:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58354 /var/tmp/spdk-nbd.sock 00:05:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58354 ']' 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
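At this point in the log Round 0 has been torn down (both nbd devices stopped, nbd_get_disks back to an empty list, spdk_kill_instance SIGTERM sent) and the harness sleeps 3 seconds before Round 1 repeats the whole cycle against pid 58354. A condensed, approximate sketch of that repeat cycle, assuming the usual SPDK rpc.py and autotest_common.sh helpers seen in the trace (this is not the actual event.sh, just the shape of what the log shows):

  # condensed sketch of the repeat cycle: each round the freshly started app gets two
  # malloc bdevs (64 MiB, 4 KiB blocks) over RPC, they are verified through nbd, and
  # the instance is then asked to terminate with SIGTERM
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  app_pid=58354                                              # pid from the trace; illustrative
  for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock          # helper from autotest_common.sh
    $rpc bdev_malloc_create 64 4096                          # Malloc0
    $rpc bdev_malloc_create 64 4096                          # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $rpc spdk_kill_instance SIGTERM
    sleep 3                                                  # matches the 'sleep 3' between rounds in the trace
  done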
00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.419 11:20:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.419 11:20:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.419 Malloc0 00:05:22.419 11:20:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.678 Malloc1 00:05:22.678 11:20:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.678 11:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.937 /dev/nbd0 00:05:22.937 11:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.937 11:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.937 1+0 records in 00:05:22.937 1+0 records out 
00:05:22.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029797 s, 13.7 MB/s 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.937 11:20:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.937 11:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.937 11:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.937 11:20:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.195 /dev/nbd1 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.195 1+0 records in 00:05:23.195 1+0 records out 00:05:23.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265795 s, 15.4 MB/s 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.195 11:20:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.195 11:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.453 { 00:05:23.453 "nbd_device": "/dev/nbd0", 00:05:23.453 "bdev_name": "Malloc0" 00:05:23.453 }, 00:05:23.453 { 00:05:23.453 "nbd_device": "/dev/nbd1", 00:05:23.453 "bdev_name": "Malloc1" 00:05:23.453 } 
00:05:23.453 ]' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.453 { 00:05:23.453 "nbd_device": "/dev/nbd0", 00:05:23.453 "bdev_name": "Malloc0" 00:05:23.453 }, 00:05:23.453 { 00:05:23.453 "nbd_device": "/dev/nbd1", 00:05:23.453 "bdev_name": "Malloc1" 00:05:23.453 } 00:05:23.453 ]' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.453 /dev/nbd1' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.453 /dev/nbd1' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.453 256+0 records in 00:05:23.453 256+0 records out 00:05:23.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00821768 s, 128 MB/s 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.453 256+0 records in 00:05:23.453 256+0 records out 00:05:23.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188079 s, 55.8 MB/s 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.453 256+0 records in 00:05:23.453 256+0 records out 00:05:23.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149206 s, 70.3 MB/s 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.453 11:20:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.453 11:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.454 11:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.711 11:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.712 11:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.970 11:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.227 11:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.227 11:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.228 11:20:09 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.228 11:20:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.228 11:20:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.485 11:20:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.052 [2024-10-27 11:20:10.132943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.052 [2024-10-27 11:20:10.203775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.052 [2024-10-27 11:20:10.203873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.052 [2024-10-27 11:20:10.300425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.052 [2024-10-27 11:20:10.300482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.584 11:20:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.584 spdk_app_start Round 2 00:05:27.584 11:20:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.584 11:20:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58354 /var/tmp/spdk-nbd.sock 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58354 ']' 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
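The middle of each round is the same write/verify pass: 1 MiB of /dev/urandom goes into a temp file, that file is written through each nbd device with O_DIRECT, and cmp checks each device byte-for-byte against the source before the temp file is removed. Pulled out of the trace into a minimal stand-alone sketch (paths, sizes and flags are the ones in the log; error handling is left to the caller):

  # the write/verify pass as it appears in the trace
  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)

  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 256 x 4 KiB = 1 MiB of random data
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # O_DIRECT write through the bdev
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                              # any mismatch fails the test
  done
  rm "$tmp_file"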
00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.584 11:20:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.584 11:20:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.844 Malloc0 00:05:27.844 11:20:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.105 Malloc1 00:05:28.105 11:20:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.105 11:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.367 /dev/nbd0 00:05:28.367 11:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.367 11:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.367 1+0 records in 00:05:28.367 1+0 records out 
00:05:28.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477435 s, 8.6 MB/s 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.367 11:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.367 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.367 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.367 11:20:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.629 /dev/nbd1 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.629 1+0 records in 00:05:28.629 1+0 records out 00:05:28.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219659 s, 18.6 MB/s 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.629 11:20:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.629 11:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.890 { 00:05:28.890 "nbd_device": "/dev/nbd0", 00:05:28.890 "bdev_name": "Malloc0" 00:05:28.890 }, 00:05:28.890 { 00:05:28.890 "nbd_device": "/dev/nbd1", 00:05:28.890 "bdev_name": "Malloc1" 00:05:28.890 } 
00:05:28.890 ]' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.890 { 00:05:28.890 "nbd_device": "/dev/nbd0", 00:05:28.890 "bdev_name": "Malloc0" 00:05:28.890 }, 00:05:28.890 { 00:05:28.890 "nbd_device": "/dev/nbd1", 00:05:28.890 "bdev_name": "Malloc1" 00:05:28.890 } 00:05:28.890 ]' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.890 /dev/nbd1' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.890 /dev/nbd1' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.890 11:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.891 11:20:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.891 256+0 records in 00:05:28.891 256+0 records out 00:05:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609815 s, 172 MB/s 00:05:28.891 11:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.891 11:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.891 256+0 records in 00:05:28.891 256+0 records out 00:05:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128362 s, 81.7 MB/s 00:05:28.891 11:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.891 11:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.891 256+0 records in 00:05:28.891 256+0 records out 00:05:28.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172941 s, 60.6 MB/s 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.891 11:20:14 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.891 11:20:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.152 11:20:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.413 11:20:14 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.413 11:20:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.413 11:20:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.986 11:20:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.558 [2024-10-27 11:20:15.576082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.558 [2024-10-27 11:20:15.642919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.558 [2024-10-27 11:20:15.643007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.558 [2024-10-27 11:20:15.741286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.558 [2024-10-27 11:20:15.741340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.108 11:20:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58354 /var/tmp/spdk-nbd.sock 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58354 ']' 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
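After the devices are stopped, the teardown check in the trace asks the target for its remaining nbd attachments and insists the count is zero: nbd_get_disks returns '[]', jq extracts no device names, and grep -c therefore reports 0. A minimal sketch of that check, assuming the same rpc.py socket as in the log:

  # post-teardown check: nbd_get_disks must report an empty JSON array
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  disks_json=$($rpc nbd_get_disks)                               # '[]' when nothing is attached
  disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$disk_names" | grep -c /dev/nbd || true)         # grep -c exits 1 on zero matches
  if [[ "$count" -ne 0 ]]; then
    echo "nbd devices still attached: $disk_names" >&2
    exit 1
  fi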
00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:33.108 11:20:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58354 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58354 ']' 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58354 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58354 00:05:33.108 killing process with pid 58354 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58354' 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58354 00:05:33.108 11:20:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58354 00:05:33.676 spdk_app_start is called in Round 0. 00:05:33.676 Shutdown signal received, stop current app iteration 00:05:33.676 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 reinitialization... 00:05:33.676 spdk_app_start is called in Round 1. 00:05:33.676 Shutdown signal received, stop current app iteration 00:05:33.676 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 reinitialization... 00:05:33.676 spdk_app_start is called in Round 2. 00:05:33.676 Shutdown signal received, stop current app iteration 00:05:33.676 Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 reinitialization... 00:05:33.676 spdk_app_start is called in Round 3. 00:05:33.676 Shutdown signal received, stop current app iteration 00:05:33.676 11:20:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.676 ************************************ 00:05:33.676 END TEST app_repeat 00:05:33.676 ************************************ 00:05:33.676 11:20:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.676 00:05:33.676 real 0m17.548s 00:05:33.676 user 0m38.406s 00:05:33.676 sys 0m2.017s 00:05:33.676 11:20:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.676 11:20:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.676 11:20:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.676 11:20:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.676 11:20:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.676 11:20:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.676 11:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.676 ************************************ 00:05:33.676 START TEST cpu_locks 00:05:33.676 ************************************ 00:05:33.676 11:20:18 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.676 * Looking for test storage... 
00:05:33.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.676 11:20:18 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:33.676 11:20:18 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:05:33.676 11:20:18 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.935 11:20:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:33.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.935 --rc genhtml_branch_coverage=1 00:05:33.935 --rc genhtml_function_coverage=1 00:05:33.935 --rc genhtml_legend=1 00:05:33.935 --rc geninfo_all_blocks=1 00:05:33.935 --rc geninfo_unexecuted_blocks=1 00:05:33.935 00:05:33.935 ' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:33.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.935 --rc genhtml_branch_coverage=1 00:05:33.935 --rc genhtml_function_coverage=1 
00:05:33.935 --rc genhtml_legend=1 00:05:33.935 --rc geninfo_all_blocks=1 00:05:33.935 --rc geninfo_unexecuted_blocks=1 00:05:33.935 00:05:33.935 ' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:33.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.935 --rc genhtml_branch_coverage=1 00:05:33.935 --rc genhtml_function_coverage=1 00:05:33.935 --rc genhtml_legend=1 00:05:33.935 --rc geninfo_all_blocks=1 00:05:33.935 --rc geninfo_unexecuted_blocks=1 00:05:33.935 00:05:33.935 ' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:33.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.935 --rc genhtml_branch_coverage=1 00:05:33.935 --rc genhtml_function_coverage=1 00:05:33.935 --rc genhtml_legend=1 00:05:33.935 --rc geninfo_all_blocks=1 00:05:33.935 --rc geninfo_unexecuted_blocks=1 00:05:33.935 00:05:33.935 ' 00:05:33.935 11:20:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.935 11:20:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.935 11:20:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.935 11:20:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.935 11:20:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 ************************************ 00:05:33.935 START TEST default_locks 00:05:33.935 ************************************ 00:05:33.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58779 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58779 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 11:20:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.935 [2024-10-27 11:20:19.048133] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:33.935 [2024-10-27 11:20:19.048223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:05:33.935 [2024-10-27 11:20:19.197884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.194 [2024-10-27 11:20:19.274174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.765 11:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.765 11:20:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:34.765 11:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58779 00:05:34.765 11:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58779 00:05:34.765 11:20:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.765 11:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58779 00:05:34.765 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58779 ']' 00:05:34.766 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58779 00:05:34.766 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:34.766 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58779 00:05:35.027 killing process with pid 58779 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58779' 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58779 00:05:35.027 11:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58779 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58779 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58779 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:35.966 ERROR: process (pid: 58779) is no longer running 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.966 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58779 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58779 ']' 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.967 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58779) - No such process 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.967 00:05:35.967 real 0m2.258s 00:05:35.967 user 0m2.233s 00:05:35.967 sys 0m0.421s 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.967 ************************************ 00:05:35.967 END TEST default_locks 00:05:35.967 11:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.967 ************************************ 00:05:36.225 11:20:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.225 11:20:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.225 11:20:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.225 11:20:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.225 ************************************ 00:05:36.225 START TEST default_locks_via_rpc 00:05:36.225 ************************************ 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58836 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58836 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58836 ']' 
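The default_locks trace above shows how the harness proves the core lock is real: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, so the test passes only if the freshly started spdk_tgt (pid 58779, mask 0x1) actually holds an advisory lock on a /var/tmp/spdk_cpu_lock_* file, and the follow-up waitforlisten is expected to fail once killprocess has removed it. A minimal standalone version of that check, assuming a target is already running and that the pgrep lookup below is only a hypothetical way to find its pid (the harness tracks the pid itself):

  # Succeeds only if the target holds a lock whose path contains "spdk_cpu_lock".
  pid=$(pgrep -f spdk_tgt | head -n1)   # hypothetical pid lookup, not from the harness
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its CPU core lock"
  else
      echo "pid $pid holds no spdk_cpu_lock" >&2
  fi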
00:05:36.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.225 11:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.225 [2024-10-27 11:20:21.361735] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:36.225 [2024-10-27 11:20:21.361841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58836 ] 00:05:36.483 [2024-10-27 11:20:21.513687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.483 [2024-10-27 11:20:21.588627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.051 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.051 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.051 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58836 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58836 00:05:37.052 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58836 00:05:37.357 11:20:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58836 ']' 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58836 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58836 00:05:37.357 killing process with pid 58836 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58836' 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58836 00:05:37.357 11:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58836 00:05:38.320 00:05:38.320 real 0m2.289s 00:05:38.320 user 0m2.298s 00:05:38.320 sys 0m0.416s 00:05:38.320 ************************************ 00:05:38.320 END TEST default_locks_via_rpc 00:05:38.320 ************************************ 00:05:38.320 11:20:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.320 11:20:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.580 11:20:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:38.580 11:20:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.580 11:20:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.580 11:20:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.580 ************************************ 00:05:38.580 START TEST non_locking_app_on_locked_coremask 00:05:38.580 ************************************ 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58895 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58895 /var/tmp/spdk.sock 00:05:38.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58895 ']' 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
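default_locks_via_rpc, which ends just above, covers the same lock but toggles it at runtime instead of at startup: rpc_cmd framework_disable_cpumask_locks releases the lock files, framework_enable_cpumask_locks re-claims them, and lslocks then confirms the lock is back before the target is killed. A rough equivalent outside the test harness, assuming the target listens on the default /var/tmp/spdk.sock and that scripts/rpc.py from the SPDK tree is used as the RPC client:

  # Drop the per-core lock files without stopping the target...
  ./scripts/rpc.py framework_disable_cpumask_locks
  # ...then take them again; this fails if another process claimed a core meanwhile.
  ./scripts/rpc.py framework_enable_cpumask_locks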
00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.580 11:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.580 [2024-10-27 11:20:23.706945] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:38.580 [2024-10-27 11:20:23.707060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:05:38.580 [2024-10-27 11:20:23.852638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.839 [2024-10-27 11:20:23.925944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58911 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58911 /var/tmp/spdk2.sock 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58911 ']' 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:39.411 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.412 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.412 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.412 11:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.412 [2024-10-27 11:20:24.568503] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:39.412 [2024-10-27 11:20:24.568628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:05:39.671 [2024-10-27 11:20:24.732272] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.671 [2024-10-27 11:20:24.732329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.671 [2024-10-27 11:20:24.885780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.612 11:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.612 11:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:40.612 11:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58895 00:05:40.612 11:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58895 00:05:40.612 11:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58895 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58895 ']' 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58895 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58895 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.872 killing process with pid 58895 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58895' 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58895 00:05:40.872 11:20:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58895 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58911 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58911 ']' 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58911 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58911 00:05:43.406 killing process with pid 58911 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.406 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.407 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58911' 00:05:43.407 11:20:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58911 00:05:43.407 11:20:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58911 00:05:44.342 ************************************ 00:05:44.342 END TEST non_locking_app_on_locked_coremask 00:05:44.342 ************************************ 00:05:44.342 00:05:44.342 real 0m5.969s 00:05:44.342 user 0m6.244s 00:05:44.342 sys 0m0.755s 00:05:44.342 11:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.342 11:20:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.600 11:20:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.600 11:20:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.600 11:20:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.600 11:20:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.600 ************************************ 00:05:44.600 START TEST locking_app_on_unlocked_coremask 00:05:44.600 ************************************ 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59002 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59002 /var/tmp/spdk.sock 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59002 ']' 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.600 11:20:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.600 [2024-10-27 11:20:29.727913] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:44.600 [2024-10-27 11:20:29.728033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:05:44.857 [2024-10-27 11:20:29.882388] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.857 [2024-10-27 11:20:29.882423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.857 [2024-10-27 11:20:29.957954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59015 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59015 /var/tmp/spdk2.sock 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59015 ']' 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.425 11:20:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.425 [2024-10-27 11:20:30.617975] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:45.425 [2024-10-27 11:20:30.618088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59015 ] 00:05:45.686 [2024-10-27 11:20:30.781789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.686 [2024-10-27 11:20:30.933804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.627 11:20:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.627 11:20:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.627 11:20:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59015 00:05:46.627 11:20:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59015 00:05:46.627 11:20:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59002 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59002 ']' 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59002 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59002 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.886 killing process with pid 59002 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59002' 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59002 00:05:46.886 11:20:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59002 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59015 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59015 ']' 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59015 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59015 00:05:49.432 killing process with pid 59015 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.432 11:20:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59015' 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59015 00:05:49.432 11:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59015 00:05:50.374 ************************************ 00:05:50.374 END TEST locking_app_on_unlocked_coremask 00:05:50.374 ************************************ 00:05:50.374 00:05:50.374 real 0m5.984s 00:05:50.374 user 0m6.292s 00:05:50.374 sys 0m0.745s 00:05:50.374 11:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.374 11:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.635 11:20:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:50.635 11:20:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.635 11:20:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.635 11:20:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.635 ************************************ 00:05:50.635 START TEST locking_app_on_locked_coremask 00:05:50.635 ************************************ 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59104 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59104 /var/tmp/spdk.sock 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59104 ']' 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.635 11:20:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.635 [2024-10-27 11:20:35.773859] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:50.635 [2024-10-27 11:20:35.774140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59104 ] 00:05:50.896 [2024-10-27 11:20:35.930756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.896 [2024-10-27 11:20:36.017073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59114 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59114 /var/tmp/spdk2.sock 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59114 /var/tmp/spdk2.sock 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59114 /var/tmp/spdk2.sock 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59114 ']' 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.466 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.467 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.467 11:20:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.467 [2024-10-27 11:20:36.681968] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:51.467 [2024-10-27 11:20:36.682213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59114 ] 00:05:51.727 [2024-10-27 11:20:36.843322] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59104 has claimed it. 00:05:51.727 [2024-10-27 11:20:36.843364] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.300 ERROR: process (pid: 59114) is no longer running 00:05:52.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59114) - No such process 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59104 ']' 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59104 00:05:52.300 killing process with pid 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59104' 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59104 00:05:52.300 11:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59104 00:05:53.684 00:05:53.684 real 0m2.945s 00:05:53.684 user 0m3.177s 00:05:53.684 sys 0m0.514s 00:05:53.684 11:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.684 11:20:38 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:53.684 ************************************ 00:05:53.684 END TEST locking_app_on_locked_coremask 00:05:53.684 ************************************ 00:05:53.684 11:20:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:53.684 11:20:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.684 11:20:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.684 11:20:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.684 ************************************ 00:05:53.684 START TEST locking_overlapped_coremask 00:05:53.684 ************************************ 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59173 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59173 /var/tmp/spdk.sock 00:05:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59173 ']' 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.684 11:20:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.684 [2024-10-27 11:20:38.774385] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:53.684 [2024-10-27 11:20:38.774515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:05:53.684 [2024-10-27 11:20:38.931954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.944 [2024-10-27 11:20:39.020131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.945 [2024-10-27 11:20:39.020416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.945 [2024-10-27 11:20:39.020431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59185 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59185 /var/tmp/spdk2.sock 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59185 /var/tmp/spdk2.sock 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.641 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59185 /var/tmp/spdk2.sock 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59185 ']' 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.642 11:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.642 [2024-10-27 11:20:39.672517] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:05:54.642 [2024-10-27 11:20:39.672783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59185 ] 00:05:54.642 [2024-10-27 11:20:39.845341] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59173 has claimed it. 00:05:54.642 [2024-10-27 11:20:39.845417] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.207 ERROR: process (pid: 59185) is no longer running 00:05:55.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59185) - No such process 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59173 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59173 ']' 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59173 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59173 00:05:55.207 killing process with pid 59173 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59173' 00:05:55.207 11:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59173 00:05:55.207 11:20:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59173 00:05:56.587 ************************************ 00:05:56.587 END TEST locking_overlapped_coremask 00:05:56.587 ************************************ 00:05:56.587 00:05:56.587 real 0m2.812s 00:05:56.587 user 0m7.653s 00:05:56.587 sys 0m0.413s 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.587 11:20:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.587 11:20:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.587 11:20:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.587 11:20:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.587 ************************************ 00:05:56.587 START TEST locking_overlapped_coremask_via_rpc 00:05:56.587 ************************************ 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:56.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59238 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59238 /var/tmp/spdk.sock 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59238 ']' 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.587 11:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.587 [2024-10-27 11:20:41.634517] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:56.587 [2024-10-27 11:20:41.634778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:05:56.587 [2024-10-27 11:20:41.789476] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.587 [2024-10-27 11:20:41.789526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.845 [2024-10-27 11:20:41.890120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.845 [2024-10-27 11:20:41.890383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.845 [2024-10-27 11:20:41.890399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59256 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59256 /var/tmp/spdk2.sock 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59256 ']' 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.413 11:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.413 [2024-10-27 11:20:42.549107] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:05:57.413 [2024-10-27 11:20:42.549415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:05:57.673 [2024-10-27 11:20:42.721105] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.673 [2024-10-27 11:20:42.721146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.673 [2024-10-27 11:20:42.920770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.673 [2024-10-27 11:20:42.924473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.673 [2024-10-27 11:20:42.924490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.613 [2024-10-27 11:20:43.872403] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59238 has claimed it. 00:05:58.613 request: 00:05:58.613 { 00:05:58.613 "method": "framework_enable_cpumask_locks", 00:05:58.613 "req_id": 1 00:05:58.613 } 00:05:58.613 Got JSON-RPC error response 00:05:58.613 response: 00:05:58.613 { 00:05:58.613 "code": -32603, 00:05:58.613 "message": "Failed to claim CPU core: 2" 00:05:58.613 } 00:05:58.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
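The JSON-RPC exchange above is the intended failure: both targets in this test start with --disable-cpumask-locks, the first (pid 59238, mask 0x7, cores 0-2) claims the locks via framework_enable_cpumask_locks, and the second (pid 59256, mask 0x1c, cores 2-4) then gets -32603 "Failed to claim CPU core: 2", because core 2 is the one core the two masks share. A hedged sketch of the same collision, simplified so that the first instance takes its locks at startup rather than over RPC, and assuming the binary and socket paths used in this run:

  # First target claims cores 0-2 and creates /var/tmp/spdk_cpu_lock_000..002.
  ./build/bin/spdk_tgt -m 0x7 &
  # Second target overlaps on core 2 but skips the locks at startup.
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # (wait for both RPC sockets to appear before issuing RPCs)
  # Asking the second target to take its locks now fails with JSON-RPC error
  # -32603, "Failed to claim CPU core: 2".
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks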
00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59238 /var/tmp/spdk.sock 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59238 ']' 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.613 11:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59256 /var/tmp/spdk2.sock 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59256 ']' 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.873 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.133 00:05:59.133 real 0m2.744s 00:05:59.133 user 0m1.075s 00:05:59.133 sys 0m0.117s 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.133 11:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.133 ************************************ 00:05:59.133 END TEST locking_overlapped_coremask_via_rpc 00:05:59.133 ************************************ 00:05:59.133 11:20:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.133 11:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59238 ]] 00:05:59.133 11:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59238 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59238 ']' 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59238 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59238 00:05:59.133 killing process with pid 59238 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59238' 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59238 00:05:59.133 11:20:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59238 00:06:00.517 11:20:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59256 ]] 00:06:00.517 11:20:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59256 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59256 ']' 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59256 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.517 
11:20:45 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59256 00:06:00.517 killing process with pid 59256 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59256' 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59256 00:06:00.517 11:20:45 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59256 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.457 Process with pid 59238 is not found 00:06:01.457 Process with pid 59256 is not found 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59238 ]] 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59238 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59238 ']' 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59238 00:06:01.457 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59238) - No such process 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59238 is not found' 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59256 ]] 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59256 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59256 ']' 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59256 00:06:01.457 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59256) - No such process 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59256 is not found' 00:06:01.457 11:20:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.457 00:06:01.457 real 0m27.903s 00:06:01.457 user 0m48.118s 00:06:01.457 sys 0m4.160s 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.457 11:20:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.457 ************************************ 00:06:01.457 END TEST cpu_locks 00:06:01.457 ************************************ 00:06:01.718 ************************************ 00:06:01.718 END TEST event 00:06:01.718 ************************************ 00:06:01.718 00:06:01.718 real 0m52.995s 00:06:01.718 user 1m39.123s 00:06:01.718 sys 0m6.943s 00:06:01.718 11:20:46 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.718 11:20:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.718 11:20:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.718 11:20:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.718 11:20:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.718 11:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:01.718 ************************************ 00:06:01.718 START TEST thread 00:06:01.718 ************************************ 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:01.718 * Looking for test storage... 
00:06:01.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:01.718 11:20:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.718 11:20:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.718 11:20:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.718 11:20:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.718 11:20:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.718 11:20:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.718 11:20:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.718 11:20:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.718 11:20:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.718 11:20:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.718 11:20:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.718 11:20:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:01.718 11:20:46 thread -- scripts/common.sh@345 -- # : 1 00:06:01.718 11:20:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.718 11:20:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.718 11:20:46 thread -- scripts/common.sh@365 -- # decimal 1 00:06:01.718 11:20:46 thread -- scripts/common.sh@353 -- # local d=1 00:06:01.718 11:20:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.718 11:20:46 thread -- scripts/common.sh@355 -- # echo 1 00:06:01.718 11:20:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.718 11:20:46 thread -- scripts/common.sh@366 -- # decimal 2 00:06:01.718 11:20:46 thread -- scripts/common.sh@353 -- # local d=2 00:06:01.718 11:20:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.718 11:20:46 thread -- scripts/common.sh@355 -- # echo 2 00:06:01.718 11:20:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.718 11:20:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.718 11:20:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.718 11:20:46 thread -- scripts/common.sh@368 -- # return 0 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.718 --rc genhtml_branch_coverage=1 00:06:01.718 --rc genhtml_function_coverage=1 00:06:01.718 --rc genhtml_legend=1 00:06:01.718 --rc geninfo_all_blocks=1 00:06:01.718 --rc geninfo_unexecuted_blocks=1 00:06:01.718 00:06:01.718 ' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.718 --rc genhtml_branch_coverage=1 00:06:01.718 --rc genhtml_function_coverage=1 00:06:01.718 --rc genhtml_legend=1 00:06:01.718 --rc geninfo_all_blocks=1 00:06:01.718 --rc geninfo_unexecuted_blocks=1 00:06:01.718 00:06:01.718 ' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:01.718 --rc genhtml_branch_coverage=1 00:06:01.718 --rc genhtml_function_coverage=1 00:06:01.718 --rc genhtml_legend=1 00:06:01.718 --rc geninfo_all_blocks=1 00:06:01.718 --rc geninfo_unexecuted_blocks=1 00:06:01.718 00:06:01.718 ' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:01.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.718 --rc genhtml_branch_coverage=1 00:06:01.718 --rc genhtml_function_coverage=1 00:06:01.718 --rc genhtml_legend=1 00:06:01.718 --rc geninfo_all_blocks=1 00:06:01.718 --rc geninfo_unexecuted_blocks=1 00:06:01.718 00:06:01.718 ' 00:06:01.718 11:20:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.718 11:20:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.718 ************************************ 00:06:01.718 START TEST thread_poller_perf 00:06:01.718 ************************************ 00:06:01.718 11:20:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.718 [2024-10-27 11:20:46.972501] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:01.718 [2024-10-27 11:20:46.972604] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59411 ] 00:06:01.978 [2024-10-27 11:20:47.128840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.978 [2024-10-27 11:20:47.211379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.978 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:03.363 [2024-10-27T11:20:48.644Z] ====================================== 00:06:03.363 [2024-10-27T11:20:48.644Z] busy:2610099610 (cyc) 00:06:03.363 [2024-10-27T11:20:48.644Z] total_run_count: 402000 00:06:03.363 [2024-10-27T11:20:48.644Z] tsc_hz: 2600000000 (cyc) 00:06:03.363 [2024-10-27T11:20:48.644Z] ====================================== 00:06:03.363 [2024-10-27T11:20:48.644Z] poller_cost: 6492 (cyc), 2496 (nsec) 00:06:03.364 00:06:03.364 real 0m1.398s 00:06:03.364 user 0m1.210s 00:06:03.364 sys 0m0.081s 00:06:03.364 11:20:48 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.364 11:20:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.364 ************************************ 00:06:03.364 END TEST thread_poller_perf 00:06:03.364 ************************************ 00:06:03.364 11:20:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.364 11:20:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:03.364 11:20:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.364 11:20:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.364 ************************************ 00:06:03.364 START TEST thread_poller_perf 00:06:03.364 ************************************ 00:06:03.364 11:20:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.364 [2024-10-27 11:20:48.422307] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:03.364 [2024-10-27 11:20:48.422537] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:06:03.364 [2024-10-27 11:20:48.580799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.624 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:03.624 [2024-10-27 11:20:48.664617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.569 [2024-10-27T11:20:49.850Z] ====================================== 00:06:04.569 [2024-10-27T11:20:49.850Z] busy:2602819938 (cyc) 00:06:04.569 [2024-10-27T11:20:49.850Z] total_run_count: 5302000 00:06:04.569 [2024-10-27T11:20:49.850Z] tsc_hz: 2600000000 (cyc) 00:06:04.569 [2024-10-27T11:20:49.850Z] ====================================== 00:06:04.569 [2024-10-27T11:20:49.850Z] poller_cost: 490 (cyc), 188 (nsec) 00:06:04.569 ************************************ 00:06:04.569 END TEST thread_poller_perf 00:06:04.569 ************************************ 00:06:04.569 00:06:04.569 real 0m1.395s 00:06:04.569 user 0m1.216s 00:06:04.569 sys 0m0.072s 00:06:04.569 11:20:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.569 11:20:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.569 11:20:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.569 ************************************ 00:06:04.569 END TEST thread 00:06:04.569 ************************************ 00:06:04.569 00:06:04.569 real 0m3.022s 00:06:04.569 user 0m2.522s 00:06:04.569 sys 0m0.275s 00:06:04.569 11:20:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.569 11:20:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 11:20:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:04.828 11:20:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.828 11:20:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.828 11:20:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.828 11:20:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 ************************************ 00:06:04.828 START TEST app_cmdline 00:06:04.828 ************************************ 00:06:04.828 11:20:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.828 * Looking for test storage... 
00:06:04.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.828 11:20:49 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:04.828 11:20:49 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:06:04.828 11:20:49 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:04.828 11:20:49 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:04.828 11:20:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.828 11:20:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.828 --rc genhtml_branch_coverage=1 00:06:04.828 --rc genhtml_function_coverage=1 00:06:04.828 --rc genhtml_legend=1 00:06:04.828 --rc geninfo_all_blocks=1 00:06:04.828 --rc geninfo_unexecuted_blocks=1 00:06:04.828 00:06:04.828 ' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.828 --rc genhtml_branch_coverage=1 00:06:04.828 --rc genhtml_function_coverage=1 00:06:04.828 --rc genhtml_legend=1 00:06:04.828 --rc geninfo_all_blocks=1 00:06:04.828 --rc geninfo_unexecuted_blocks=1 00:06:04.828 00:06:04.828 ' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.828 --rc genhtml_branch_coverage=1 00:06:04.828 --rc genhtml_function_coverage=1 00:06:04.828 --rc genhtml_legend=1 00:06:04.828 --rc geninfo_all_blocks=1 00:06:04.828 --rc geninfo_unexecuted_blocks=1 00:06:04.828 00:06:04.828 ' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.828 --rc genhtml_branch_coverage=1 00:06:04.828 --rc genhtml_function_coverage=1 00:06:04.828 --rc genhtml_legend=1 00:06:04.828 --rc geninfo_all_blocks=1 00:06:04.828 --rc geninfo_unexecuted_blocks=1 00:06:04.828 00:06:04.828 ' 00:06:04.828 11:20:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.828 11:20:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59531 00:06:04.828 11:20:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59531 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59531 ']' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.828 11:20:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.828 11:20:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 [2024-10-27 11:20:50.079438] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:06:04.828 [2024-10-27 11:20:50.079735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:06:05.087 [2024-10-27 11:20:50.236774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.087 [2024-10-27 11:20:50.312634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.654 11:20:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.654 11:20:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:05.654 11:20:50 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:05.912 { 00:06:05.912 "version": "SPDK v25.01-pre git sha1 169c3cd04", 00:06:05.912 "fields": { 00:06:05.912 "major": 25, 00:06:05.912 "minor": 1, 00:06:05.912 "patch": 0, 00:06:05.912 "suffix": "-pre", 00:06:05.912 "commit": "169c3cd04" 00:06:05.912 } 00:06:05.912 } 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.912 11:20:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:05.912 11:20:51 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.171 request: 00:06:06.171 { 00:06:06.171 "method": "env_dpdk_get_mem_stats", 00:06:06.171 "req_id": 1 00:06:06.171 } 00:06:06.171 Got JSON-RPC error response 00:06:06.171 response: 00:06:06.171 { 00:06:06.171 "code": -32601, 00:06:06.171 "message": "Method not found" 00:06:06.171 } 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.171 11:20:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59531 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59531 ']' 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59531 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59531 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.171 killing process with pid 59531 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59531' 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 59531 00:06:06.171 11:20:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 59531 00:06:07.553 00:06:07.553 real 0m2.575s 00:06:07.553 user 0m2.855s 00:06:07.553 sys 0m0.396s 00:06:07.553 11:20:52 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.553 ************************************ 00:06:07.553 END TEST app_cmdline 00:06:07.553 ************************************ 00:06:07.553 11:20:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.553 11:20:52 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.553 11:20:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.553 11:20:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.553 11:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.553 ************************************ 00:06:07.553 START TEST version 00:06:07.553 ************************************ 00:06:07.553 11:20:52 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.554 * Looking for test storage... 
00:06:07.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1689 -- # lcov --version 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:07.554 11:20:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.554 11:20:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.554 11:20:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.554 11:20:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.554 11:20:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.554 11:20:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.554 11:20:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.554 11:20:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.554 11:20:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.554 11:20:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.554 11:20:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.554 11:20:52 version -- scripts/common.sh@344 -- # case "$op" in 00:06:07.554 11:20:52 version -- scripts/common.sh@345 -- # : 1 00:06:07.554 11:20:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.554 11:20:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.554 11:20:52 version -- scripts/common.sh@365 -- # decimal 1 00:06:07.554 11:20:52 version -- scripts/common.sh@353 -- # local d=1 00:06:07.554 11:20:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.554 11:20:52 version -- scripts/common.sh@355 -- # echo 1 00:06:07.554 11:20:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.554 11:20:52 version -- scripts/common.sh@366 -- # decimal 2 00:06:07.554 11:20:52 version -- scripts/common.sh@353 -- # local d=2 00:06:07.554 11:20:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.554 11:20:52 version -- scripts/common.sh@355 -- # echo 2 00:06:07.554 11:20:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.554 11:20:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.554 11:20:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.554 11:20:52 version -- scripts/common.sh@368 -- # return 0 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:07.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.554 --rc genhtml_branch_coverage=1 00:06:07.554 --rc genhtml_function_coverage=1 00:06:07.554 --rc genhtml_legend=1 00:06:07.554 --rc geninfo_all_blocks=1 00:06:07.554 --rc geninfo_unexecuted_blocks=1 00:06:07.554 00:06:07.554 ' 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:07.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.554 --rc genhtml_branch_coverage=1 00:06:07.554 --rc genhtml_function_coverage=1 00:06:07.554 --rc genhtml_legend=1 00:06:07.554 --rc geninfo_all_blocks=1 00:06:07.554 --rc geninfo_unexecuted_blocks=1 00:06:07.554 00:06:07.554 ' 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:07.554 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:07.554 --rc genhtml_branch_coverage=1 00:06:07.554 --rc genhtml_function_coverage=1 00:06:07.554 --rc genhtml_legend=1 00:06:07.554 --rc geninfo_all_blocks=1 00:06:07.554 --rc geninfo_unexecuted_blocks=1 00:06:07.554 00:06:07.554 ' 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:07.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.554 --rc genhtml_branch_coverage=1 00:06:07.554 --rc genhtml_function_coverage=1 00:06:07.554 --rc genhtml_legend=1 00:06:07.554 --rc geninfo_all_blocks=1 00:06:07.554 --rc geninfo_unexecuted_blocks=1 00:06:07.554 00:06:07.554 ' 00:06:07.554 11:20:52 version -- app/version.sh@17 -- # get_header_version major 00:06:07.554 11:20:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # cut -f2 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.554 11:20:52 version -- app/version.sh@17 -- # major=25 00:06:07.554 11:20:52 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.554 11:20:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # cut -f2 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.554 11:20:52 version -- app/version.sh@18 -- # minor=1 00:06:07.554 11:20:52 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.554 11:20:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # cut -f2 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.554 11:20:52 version -- app/version.sh@19 -- # patch=0 00:06:07.554 11:20:52 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.554 11:20:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # cut -f2 00:06:07.554 11:20:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.554 11:20:52 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.554 11:20:52 version -- app/version.sh@22 -- # version=25.1 00:06:07.554 11:20:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.554 11:20:52 version -- app/version.sh@28 -- # version=25.1rc0 00:06:07.554 11:20:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:07.554 11:20:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:07.554 11:20:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:07.554 11:20:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:07.554 00:06:07.554 real 0m0.184s 00:06:07.554 user 0m0.122s 00:06:07.554 sys 0m0.091s 00:06:07.554 11:20:52 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.554 11:20:52 version -- common/autotest_common.sh@10 -- # set +x 00:06:07.554 ************************************ 00:06:07.554 END TEST version 00:06:07.554 ************************************ 00:06:07.554 11:20:52 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:07.554 11:20:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:07.554 11:20:52 -- spdk/autotest.sh@194 -- # uname -s 00:06:07.554 11:20:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:07.554 11:20:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.554 11:20:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.554 11:20:52 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:07.554 11:20:52 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:07.554 11:20:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:07.554 11:20:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.554 11:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.554 ************************************ 00:06:07.554 START TEST blockdev_nvme 00:06:07.554 ************************************ 00:06:07.554 11:20:52 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:07.554 * Looking for test storage... 00:06:07.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:07.554 11:20:52 blockdev_nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:07.554 11:20:52 blockdev_nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:07.554 11:20:52 blockdev_nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.816 11:20:52 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:07.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.816 --rc genhtml_branch_coverage=1 00:06:07.816 --rc genhtml_function_coverage=1 00:06:07.816 --rc genhtml_legend=1 00:06:07.816 --rc geninfo_all_blocks=1 00:06:07.816 --rc geninfo_unexecuted_blocks=1 00:06:07.816 00:06:07.816 ' 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:07.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.816 --rc genhtml_branch_coverage=1 00:06:07.816 --rc genhtml_function_coverage=1 00:06:07.816 --rc genhtml_legend=1 00:06:07.816 --rc geninfo_all_blocks=1 00:06:07.816 --rc geninfo_unexecuted_blocks=1 00:06:07.816 00:06:07.816 ' 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:07.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.816 --rc genhtml_branch_coverage=1 00:06:07.816 --rc genhtml_function_coverage=1 00:06:07.816 --rc genhtml_legend=1 00:06:07.816 --rc geninfo_all_blocks=1 00:06:07.816 --rc geninfo_unexecuted_blocks=1 00:06:07.816 00:06:07.816 ' 00:06:07.816 11:20:52 blockdev_nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:07.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.816 --rc genhtml_branch_coverage=1 00:06:07.816 --rc genhtml_function_coverage=1 00:06:07.816 --rc genhtml_legend=1 00:06:07.816 --rc geninfo_all_blocks=1 00:06:07.816 --rc geninfo_unexecuted_blocks=1 00:06:07.816 00:06:07.816 ' 00:06:07.816 11:20:52 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:07.816 11:20:52 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.816 11:20:52 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59697 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59697 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 59697 ']' 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.817 11:20:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.817 11:20:52 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:07.817 [2024-10-27 11:20:52.995795] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:06:07.817 [2024-10-27 11:20:52.995948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:06:08.078 [2024-10-27 11:20:53.160674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.078 [2024-10-27 11:20:53.278750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.016 11:20:53 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.016 11:20:53 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:09.016 11:20:53 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:09.016 11:20:53 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:09.016 11:20:53 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:09.016 11:20:53 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:09.016 11:20:53 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.016 11:20:54 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:09.016 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.016 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "eadfba1e-fbfd-4042-996f-aacb719f1e47"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "eadfba1e-fbfd-4042-996f-aacb719f1e47",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3cf73188-0a0a-4dce-8f98-4dd2c9f7a4fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3cf73188-0a0a-4dce-8f98-4dd2c9f7a4fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1a4abeda-a52c-4a15-baed-8fde6956bad5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a4abeda-a52c-4a15-baed-8fde6956bad5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2bed88be-bfd7-4f23-8fc5-dc5b4ff9644a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2bed88be-bfd7-4f23-8fc5-dc5b4ff9644a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a717b86a-dce6-44d8-89d6-bae362622009"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a717b86a-dce6-44d8-89d6-bae362622009",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "68a49467-6a35-4b61-89d2-018cb67dcf8c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "68a49467-6a35-4b61-89d2-018cb67dcf8c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:09.278 11:20:54 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59697 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 59697 ']' 00:06:09.278 11:20:54 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 59697 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:09.279 11:20:54 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59697 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59697' 00:06:09.279 killing process with pid 59697 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 59697 00:06:09.279 11:20:54 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 59697 00:06:10.728 11:20:55 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:10.728 11:20:55 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.729 11:20:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:10.729 11:20:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.729 11:20:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.729 ************************************ 00:06:10.729 START TEST bdev_hello_world 00:06:10.729 ************************************ 00:06:10.729 11:20:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.729 [2024-10-27 11:20:55.802673] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:10.729 [2024-10-27 11:20:55.802794] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:06:10.729 [2024-10-27 11:20:55.957884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.987 [2024-10-27 11:20:56.040605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.558 [2024-10-27 11:20:56.531201] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:11.558 [2024-10-27 11:20:56.531264] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:11.558 [2024-10-27 11:20:56.531287] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:11.558 [2024-10-27 11:20:56.534027] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:11.558 [2024-10-27 11:20:56.535066] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:11.558 [2024-10-27 11:20:56.535102] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:11.558 [2024-10-27 11:20:56.535758] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:11.558 00:06:11.558 [2024-10-27 11:20:56.535791] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:12.128 00:06:12.128 real 0m1.508s 00:06:12.128 user 0m1.242s 00:06:12.128 sys 0m0.158s 00:06:12.128 ************************************ 00:06:12.128 END TEST bdev_hello_world 00:06:12.128 ************************************ 00:06:12.128 11:20:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.128 11:20:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:12.128 11:20:57 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:12.128 11:20:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:12.128 11:20:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.128 11:20:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.128 ************************************ 00:06:12.128 START TEST bdev_bounds 00:06:12.128 ************************************ 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59818 00:06:12.128 Process bdevio pid: 59818 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59818' 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59818 00:06:12.128 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 59818 ']' 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.129 11:20:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:12.129 [2024-10-27 11:20:57.366483] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:06:12.129 [2024-10-27 11:20:57.366618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59818 ] 00:06:12.389 [2024-10-27 11:20:57.525373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.389 [2024-10-27 11:20:57.612524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.389 [2024-10-27 11:20:57.612830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.389 [2024-10-27 11:20:57.612854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.962 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.962 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:12.962 11:20:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:13.224 I/O targets: 00:06:13.224 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:13.224 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:13.224 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.224 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.224 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.224 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:13.224 00:06:13.224 00:06:13.224 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.224 http://cunit.sourceforge.net/ 00:06:13.224 00:06:13.224 00:06:13.224 Suite: bdevio tests on: Nvme3n1 00:06:13.224 Test: blockdev write read block ...passed 00:06:13.224 Test: blockdev write zeroes read block ...passed 00:06:13.224 Test: blockdev write zeroes read no split ...passed 00:06:13.224 Test: blockdev write zeroes read split ...passed 00:06:13.224 Test: blockdev write zeroes read split partial ...passed 00:06:13.224 Test: blockdev reset ...[2024-10-27 11:20:58.365890] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:13.224 [2024-10-27 11:20:58.371048] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:13.224 passed 00:06:13.224 Test: blockdev write read 8 blocks ...passed 00:06:13.224 Test: blockdev write read size > 128k ...passed 00:06:13.224 Test: blockdev write read invalid size ...passed 00:06:13.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.224 Test: blockdev write read max offset ...passed 00:06:13.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.224 Test: blockdev writev readv 8 blocks ...passed 00:06:13.224 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.224 Test: blockdev writev readv block ...passed 00:06:13.224 Test: blockdev writev readv size > 128k ...passed 00:06:13.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.224 Test: blockdev comparev and writev ...[2024-10-27 11:20:58.389847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b200a000 len:0x1000 00:06:13.224 [2024-10-27 11:20:58.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.224 passed 00:06:13.224 Test: blockdev nvme passthru rw ...passed 00:06:13.224 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:20:58.392436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.224 [2024-10-27 11:20:58.392486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.224 passed 00:06:13.224 Test: blockdev nvme admin passthru ...passed 00:06:13.224 Test: blockdev copy ...passed 00:06:13.224 Suite: bdevio tests on: Nvme2n3 00:06:13.224 Test: blockdev write read block ...passed 00:06:13.224 Test: blockdev write zeroes read block ...passed 00:06:13.224 Test: blockdev write zeroes read no split ...passed 00:06:13.224 Test: blockdev write zeroes read split ...passed 00:06:13.224 Test: blockdev write zeroes read split partial ...passed 00:06:13.224 Test: blockdev reset ...[2024-10-27 11:20:58.454021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.224 [2024-10-27 11:20:58.457883] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.224 passed 00:06:13.224 Test: blockdev write read 8 blocks ...passed 00:06:13.224 Test: blockdev write read size > 128k ...passed 00:06:13.224 Test: blockdev write read invalid size ...passed 00:06:13.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.224 Test: blockdev write read max offset ...passed 00:06:13.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.224 Test: blockdev writev readv 8 blocks ...passed 00:06:13.224 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.224 Test: blockdev writev readv block ...passed 00:06:13.224 Test: blockdev writev readv size > 128k ...passed 00:06:13.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.224 Test: blockdev comparev and writev ...[2024-10-27 11:20:58.475395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295a06000 len:0x1000 00:06:13.224 [2024-10-27 11:20:58.475472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.224 passed 00:06:13.224 Test: blockdev nvme passthru rw ...passed 00:06:13.224 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:20:58.478123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.224 [2024-10-27 11:20:58.478166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.224 passed 00:06:13.224 Test: blockdev nvme admin passthru ...passed 00:06:13.224 Test: blockdev copy ...passed 00:06:13.224 Suite: bdevio tests on: Nvme2n2 00:06:13.224 Test: blockdev write read block ...passed 00:06:13.224 Test: blockdev write zeroes read block ...passed 00:06:13.224 Test: blockdev write zeroes read no split ...passed 00:06:13.486 Test: blockdev write zeroes read split ...passed 00:06:13.486 Test: blockdev write zeroes read split partial ...passed 00:06:13.486 Test: blockdev reset ...[2024-10-27 11:20:58.542457] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.486 [2024-10-27 11:20:58.546110] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.486 passed 00:06:13.486 Test: blockdev write read 8 blocks ...passed 00:06:13.486 Test: blockdev write read size > 128k ...passed 00:06:13.486 Test: blockdev write read invalid size ...passed 00:06:13.486 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.486 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.486 Test: blockdev write read max offset ...passed 00:06:13.486 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.486 Test: blockdev writev readv 8 blocks ...passed 00:06:13.486 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.486 Test: blockdev writev readv block ...passed 00:06:13.486 Test: blockdev writev readv size > 128k ...passed 00:06:13.486 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.486 Test: blockdev comparev and writev ...[2024-10-27 11:20:58.554820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd83c000 len:0x1000 00:06:13.486 [2024-10-27 11:20:58.554880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.486 passed 00:06:13.486 Test: blockdev nvme passthru rw ...passed 00:06:13.486 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.486 Test: blockdev nvme admin passthru ...[2024-10-27 11:20:58.555970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.486 [2024-10-27 11:20:58.556010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.486 passed 00:06:13.486 Test: blockdev copy ...passed 00:06:13.486 Suite: bdevio tests on: Nvme2n1 00:06:13.486 Test: blockdev write read block ...passed 00:06:13.486 Test: blockdev write zeroes read block ...passed 00:06:13.486 Test: blockdev write zeroes read no split ...passed 00:06:13.486 Test: blockdev write zeroes read split ...passed 00:06:13.486 Test: blockdev write zeroes read split partial ...passed 00:06:13.486 Test: blockdev reset ...[2024-10-27 11:20:58.620044] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.486 [2024-10-27 11:20:58.624433] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.486 passed 00:06:13.486 Test: blockdev write read 8 blocks ...passed 00:06:13.486 Test: blockdev write read size > 128k ...passed 00:06:13.486 Test: blockdev write read invalid size ...passed 00:06:13.486 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.486 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.486 Test: blockdev write read max offset ...passed 00:06:13.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.487 Test: blockdev writev readv 8 blocks ...passed 00:06:13.487 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.487 Test: blockdev writev readv block ...passed 00:06:13.487 Test: blockdev writev readv size > 128k ...passed 00:06:13.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.487 Test: blockdev comparev and writev ...[2024-10-27 11:20:58.641929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd838000 len:0x1000 00:06:13.487 [2024-10-27 11:20:58.641994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.487 passed 00:06:13.487 Test: blockdev nvme passthru rw ...passed 00:06:13.487 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:20:58.644466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.487 [2024-10-27 11:20:58.644509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.487 passed 00:06:13.487 Test: blockdev nvme admin passthru ...passed 00:06:13.487 Test: blockdev copy ...passed 00:06:13.487 Suite: bdevio tests on: Nvme1n1 00:06:13.487 Test: blockdev write read block ...passed 00:06:13.487 Test: blockdev write zeroes read block ...passed 00:06:13.487 Test: blockdev write zeroes read no split ...passed 00:06:13.487 Test: blockdev write zeroes read split ...passed 00:06:13.487 Test: blockdev write zeroes read split partial ...passed 00:06:13.487 Test: blockdev reset ...[2024-10-27 11:20:58.708205] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:13.487 [2024-10-27 11:20:58.712086] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:13.487 passed 00:06:13.487 Test: blockdev write read 8 blocks ...passed 00:06:13.487 Test: blockdev write read size > 128k ...passed 00:06:13.487 Test: blockdev write read invalid size ...passed 00:06:13.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.487 Test: blockdev write read max offset ...passed 00:06:13.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.487 Test: blockdev writev readv 8 blocks ...passed 00:06:13.487 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.487 Test: blockdev writev readv block ...passed 00:06:13.487 Test: blockdev writev readv size > 128k ...passed 00:06:13.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.487 Test: blockdev comparev and writev ...[2024-10-27 11:20:58.721319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd834000 len:0x1000 00:06:13.487 [2024-10-27 11:20:58.721384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.487 passed 00:06:13.487 Test: blockdev nvme passthru rw ...passed 00:06:13.487 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.487 Test: blockdev nvme admin passthru ...[2024-10-27 11:20:58.722270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.487 [2024-10-27 11:20:58.722336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.487 passed 00:06:13.487 Test: blockdev copy ...passed 00:06:13.487 Suite: bdevio tests on: Nvme0n1 00:06:13.487 Test: blockdev write read block ...passed 00:06:13.487 Test: blockdev write zeroes read block ...passed 00:06:13.487 Test: blockdev write zeroes read no split ...passed 00:06:13.487 Test: blockdev write zeroes read split ...passed 00:06:13.749 Test: blockdev write zeroes read split partial ...passed 00:06:13.749 Test: blockdev reset ...[2024-10-27 11:20:58.782694] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:13.749 [2024-10-27 11:20:58.787095] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:13.749 passed 00:06:13.749 Test: blockdev write read 8 blocks ...passed 00:06:13.749 Test: blockdev write read size > 128k ...passed 00:06:13.749 Test: blockdev write read invalid size ...passed 00:06:13.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.749 Test: blockdev write read max offset ...passed 00:06:13.749 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.749 Test: blockdev writev readv 8 blocks ...passed 00:06:13.749 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.749 Test: blockdev writev readv block ...passed 00:06:13.749 Test: blockdev writev readv size > 128k ...passed 00:06:13.749 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.749 Test: blockdev comparev and writev ...passed 00:06:13.749 Test: blockdev nvme passthru rw ...[2024-10-27 11:20:58.796466] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:13.749 separate metadata which is not supported yet. 00:06:13.749 passed 00:06:13.749 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:20:58.796879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:13.749 [2024-10-27 11:20:58.796927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:13.749 passed 00:06:13.749 Test: blockdev nvme admin passthru ...passed 00:06:13.749 Test: blockdev copy ...passed 00:06:13.749 00:06:13.749 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.749 suites 6 6 n/a 0 0 00:06:13.749 tests 138 138 138 0 0 00:06:13.749 asserts 893 893 893 0 n/a 00:06:13.749 00:06:13.749 Elapsed time = 1.255 seconds 00:06:13.749 0 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59818 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 59818 ']' 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 59818 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59818 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.749 killing process with pid 59818 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59818' 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 59818 00:06:13.749 11:20:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 59818 00:06:14.315 11:20:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:14.315 00:06:14.315 real 0m2.213s 00:06:14.315 user 0m5.702s 00:06:14.315 sys 0m0.279s 00:06:14.315 ************************************ 00:06:14.315 END TEST bdev_bounds 00:06:14.315 ************************************ 00:06:14.315 11:20:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.315 11:20:59 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.315 11:20:59 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.315 11:20:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:14.315 11:20:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.315 11:20:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.315 ************************************ 00:06:14.315 START TEST bdev_nbd 00:06:14.315 ************************************ 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59873 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59873 /var/tmp/spdk-nbd.sock 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 59873 ']' 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 
00:06:14.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.316 11:20:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:14.574 [2024-10-27 11:20:59.640464] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:14.574 [2024-10-27 11:20:59.640580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.574 [2024-10-27 11:20:59.797140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.831 [2024-10-27 11:20:59.873538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.398 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.657 11:21:00 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.657 1+0 records in 00:06:15.657 1+0 records out 00:06:15.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233459 s, 17.5 MB/s 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.657 1+0 records in 00:06:15.657 1+0 records out 00:06:15.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276391 s, 14.8 MB/s 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:15.657 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.916 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.916 11:21:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:15.916 11:21:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.916 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.916 11:21:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.916 1+0 records in 00:06:15.916 1+0 records out 00:06:15.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304458 s, 13.5 MB/s 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.916 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.177 1+0 records in 00:06:16.177 1+0 records out 00:06:16.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005191 s, 7.9 MB/s 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.177 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:16.435 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:16.435 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:16.435 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:16.435 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.436 1+0 records in 00:06:16.436 1+0 records out 00:06:16.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040358 s, 10.1 MB/s 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.436 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:16.695 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.696 1+0 records in 00:06:16.696 1+0 records out 00:06:16.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366733 s, 11.2 MB/s 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.696 11:21:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.957 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:16.957 { 00:06:16.957 "nbd_device": "/dev/nbd0", 00:06:16.957 "bdev_name": "Nvme0n1" 00:06:16.957 }, 00:06:16.957 { 00:06:16.957 "nbd_device": "/dev/nbd1", 00:06:16.957 "bdev_name": "Nvme1n1" 00:06:16.957 }, 00:06:16.957 { 00:06:16.957 "nbd_device": "/dev/nbd2", 00:06:16.957 "bdev_name": "Nvme2n1" 00:06:16.957 }, 00:06:16.957 { 00:06:16.957 "nbd_device": "/dev/nbd3", 00:06:16.957 "bdev_name": "Nvme2n2" 00:06:16.957 }, 00:06:16.957 { 00:06:16.958 "nbd_device": "/dev/nbd4", 00:06:16.958 "bdev_name": "Nvme2n3" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd5", 00:06:16.958 "bdev_name": "Nvme3n1" 00:06:16.958 } 00:06:16.958 ]' 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd0", 00:06:16.958 "bdev_name": "Nvme0n1" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd1", 00:06:16.958 "bdev_name": "Nvme1n1" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 
"nbd_device": "/dev/nbd2", 00:06:16.958 "bdev_name": "Nvme2n1" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd3", 00:06:16.958 "bdev_name": "Nvme2n2" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd4", 00:06:16.958 "bdev_name": "Nvme2n3" 00:06:16.958 }, 00:06:16.958 { 00:06:16.958 "nbd_device": "/dev/nbd5", 00:06:16.958 "bdev_name": "Nvme3n1" 00:06:16.958 } 00:06:16.958 ]' 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.958 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.219 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:17.479 11:21:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.479 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.740 11:21:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.013 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.014 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.014 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.278 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:18.537 /dev/nbd0 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.537 1+0 records in 00:06:18.537 1+0 records out 00:06:18.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595354 s, 6.9 MB/s 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.537 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.798 /dev/nbd1 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.798 1+0 records in 00:06:18.798 1+0 records out 
00:06:18.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441181 s, 9.3 MB/s 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.798 11:21:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:19.060 /dev/nbd10 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.060 1+0 records in 00:06:19.060 1+0 records out 00:06:19.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504066 s, 8.1 MB/s 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.060 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:19.320 /dev/nbd11 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:19.320 11:21:04 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:19.320 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.321 1+0 records in 00:06:19.321 1+0 records out 00:06:19.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337331 s, 12.1 MB/s 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.321 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:19.582 /dev/nbd12 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.582 1+0 records in 00:06:19.582 1+0 records out 00:06:19.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459622 s, 8.9 MB/s 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.582 /dev/nbd13 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.582 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.844 1+0 records in 00:06:19.844 1+0 records out 00:06:19.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426804 s, 9.6 MB/s 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.844 11:21:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.844 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd0", 00:06:19.844 "bdev_name": "Nvme0n1" 00:06:19.844 }, 00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd1", 00:06:19.844 "bdev_name": "Nvme1n1" 00:06:19.844 }, 00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd10", 00:06:19.844 "bdev_name": "Nvme2n1" 00:06:19.844 }, 00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd11", 00:06:19.844 "bdev_name": "Nvme2n2" 00:06:19.844 }, 
00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd12", 00:06:19.844 "bdev_name": "Nvme2n3" 00:06:19.844 }, 00:06:19.844 { 00:06:19.844 "nbd_device": "/dev/nbd13", 00:06:19.844 "bdev_name": "Nvme3n1" 00:06:19.844 } 00:06:19.844 ]' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd0", 00:06:19.845 "bdev_name": "Nvme0n1" 00:06:19.845 }, 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd1", 00:06:19.845 "bdev_name": "Nvme1n1" 00:06:19.845 }, 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd10", 00:06:19.845 "bdev_name": "Nvme2n1" 00:06:19.845 }, 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd11", 00:06:19.845 "bdev_name": "Nvme2n2" 00:06:19.845 }, 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd12", 00:06:19.845 "bdev_name": "Nvme2n3" 00:06:19.845 }, 00:06:19.845 { 00:06:19.845 "nbd_device": "/dev/nbd13", 00:06:19.845 "bdev_name": "Nvme3n1" 00:06:19.845 } 00:06:19.845 ]' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.845 /dev/nbd1 00:06:19.845 /dev/nbd10 00:06:19.845 /dev/nbd11 00:06:19.845 /dev/nbd12 00:06:19.845 /dev/nbd13' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.845 /dev/nbd1 00:06:19.845 /dev/nbd10 00:06:19.845 /dev/nbd11 00:06:19.845 /dev/nbd12 00:06:19.845 /dev/nbd13' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.845 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:20.107 256+0 records in 00:06:20.107 256+0 records out 00:06:20.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080593 s, 130 MB/s 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.107 256+0 records in 00:06:20.107 256+0 records out 00:06:20.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0516941 s, 20.3 MB/s 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.107 256+0 records in 00:06:20.107 256+0 records out 00:06:20.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0517845 s, 20.2 MB/s 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:20.107 256+0 records in 00:06:20.107 256+0 records out 00:06:20.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0539849 s, 19.4 MB/s 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:20.107 256+0 records in 00:06:20.107 256+0 records out 00:06:20.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.051063 s, 20.5 MB/s 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.107 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:20.368 256+0 records in 00:06:20.368 256+0 records out 00:06:20.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0559806 s, 18.7 MB/s 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:20.368 256+0 records in 00:06:20.368 256+0 records out 00:06:20.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0511971 s, 20.5 MB/s 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:20.368 
11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:20.368 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.369 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.630 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.891 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.891 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.891 11:21:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.891 11:21:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.891 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.152 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.412 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.673 
11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.673 11:21:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:21.934 malloc_lvol_verify 00:06:21.934 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:22.195 3acb6e38-545b-42b8-bd35-cdb3f0cf0afc 00:06:22.195 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.456 a111e357-ffa7-4f1f-981d-639c73f4c695 00:06:22.456 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.456 /dev/nbd0 00:06:22.456 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.456 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.456 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.456 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.456 11:21:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.716 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.717 Discarding device blocks: 0/4096 done 00:06:22.717 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.717 00:06:22.717 Allocating group tables: 0/1 done 00:06:22.717 Writing inode tables: 0/1 done 00:06:22.717 Creating journal (1024 blocks): done 00:06:22.717 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.717 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59873 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 59873 ']' 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 59873 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59873 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.717 killing process with pid 59873 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59873' 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 59873 00:06:22.717 11:21:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 59873 00:06:23.286 11:21:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.286 00:06:23.286 real 0m8.977s 00:06:23.286 user 0m13.142s 00:06:23.286 sys 0m2.754s 00:06:23.286 11:21:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.286 11:21:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
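The nbd teardown trace above repeats one small pattern per device: issue nbd_stop_disk over the RPC socket, then poll /proc/partitions up to 20 times until the kernel entry for nbdN is gone before moving on (the mirror-image wait is used after nbd_start_disk). A condensed, illustrative version of that wait loop, not the exact waitfornbd/waitfornbd_exit helpers from nbd_common.sh, and with an assumed retry delay, could look like:

    # Hypothetical condensed wait helper; "present" says whether the device is
    # expected to appear (after nbd_start_disk) or disappear (after nbd_stop_disk).
    wait_for_nbd_state() {
        local nbd_name=$1 present=$2 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                [[ $present == yes ]] && return 0
            else
                [[ $present == no ]] && return 0
            fi
            sleep 0.1   # retry pacing assumed; the real helpers may differ
        done
        return 1        # device never reached the expected state in 20 tries
    }

    # Usage mirroring the trace: stop /dev/nbd0 via RPC, then wait for it to vanish.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    wait_for_nbd_state nbd0 no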
00:06:23.286 ************************************ 00:06:23.286 END TEST bdev_nbd 00:06:23.286 ************************************ 00:06:23.548 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.548 11:21:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:23.548 11:21:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:23.548 11:21:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:23.548 11:21:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.548 11:21:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.548 11:21:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:23.548 11:21:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.548 11:21:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.548 ************************************ 00:06:23.548 START TEST bdev_verify 00:06:23.548 ************************************ 00:06:23.548 11:21:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.548 [2024-10-27 11:21:08.686498] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:23.548 [2024-10-27 11:21:08.686639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60238 ] 00:06:23.810 [2024-10-27 11:21:08.850908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.810 [2024-10-27 11:21:08.976705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.810 [2024-10-27 11:21:08.976819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.382 Running I/O for 5 seconds... 
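The bdev_verify stage above launches the bdevperf example application against the generated bdev.json and lets a verify workload run for five seconds. Reading the command line out of the trace, the flags are the usual bdevperf options: -q 128 is the queue depth, -o 4096 the IO size in bytes, -w verify the workload, -t 5 the run time in seconds, and -m 0x3 pins the app to two cores; -C is passed through unchanged here. Reproducing the same run by hand from an SPDK build tree would look roughly like:

    # Same invocation as the trace above, expressed relative to the repository root.
    ./build/examples/bdevperf \
        --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3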
00:06:26.817 22400.00 IOPS, 87.50 MiB/s [2024-10-27T11:21:13.033Z] 23456.00 IOPS, 91.62 MiB/s [2024-10-27T11:21:13.974Z] 23424.00 IOPS, 91.50 MiB/s [2024-10-27T11:21:14.913Z] 23504.00 IOPS, 91.81 MiB/s [2024-10-27T11:21:14.913Z] 23206.40 IOPS, 90.65 MiB/s 00:06:29.632 Latency(us) 00:06:29.632 [2024-10-27T11:21:14.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.632 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0xbd0bd 00:06:29.632 Nvme0n1 : 5.06 1794.55 7.01 0.00 0.00 71000.82 12149.37 90742.15 00:06:29.632 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:29.632 Nvme0n1 : 5.07 2008.42 7.85 0.00 0.00 63409.23 10284.11 91548.75 00:06:29.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0xa0000 00:06:29.632 Nvme1n1 : 5.07 1794.09 7.01 0.00 0.00 70937.91 14417.92 85095.98 00:06:29.632 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0xa0000 length 0xa0000 00:06:29.632 Nvme1n1 : 5.08 2014.01 7.87 0.00 0.00 63180.83 12552.66 77433.30 00:06:29.632 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0x80000 00:06:29.632 Nvme2n1 : 5.09 1799.40 7.03 0.00 0.00 70414.73 5494.94 64124.46 00:06:29.632 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x80000 length 0x80000 00:06:29.632 Nvme2n1 : 5.09 2013.38 7.86 0.00 0.00 62987.74 14014.62 67754.14 00:06:29.632 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0x80000 00:06:29.632 Nvme2n2 : 5.10 1806.77 7.06 0.00 0.00 70032.89 9931.22 60494.77 00:06:29.632 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x80000 length 0x80000 00:06:29.632 Nvme2n2 : 5.09 2011.80 7.86 0.00 0.00 62907.54 13712.15 59284.87 00:06:29.632 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0x80000 00:06:29.632 Nvme2n3 : 5.10 1806.30 7.06 0.00 0.00 69917.48 10132.87 64124.46 00:06:29.632 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x80000 length 0x80000 00:06:29.632 Nvme2n3 : 5.09 2011.18 7.86 0.00 0.00 62812.77 12098.95 64931.05 00:06:29.632 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x0 length 0x20000 00:06:29.632 Nvme3n1 : 5.10 1805.84 7.05 0.00 0.00 69814.11 10082.46 67350.84 00:06:29.632 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:29.632 Verification LBA range: start 0x20000 length 0x20000 00:06:29.632 Nvme3n1 : 5.09 2010.56 7.85 0.00 0.00 62717.41 10737.82 68964.04 00:06:29.632 [2024-10-27T11:21:14.913Z] =================================================================================================================== 00:06:29.632 [2024-10-27T11:21:14.913Z] Total : 22876.30 89.36 0.00 0.00 66474.02 5494.94 91548.75 00:06:31.000 00:06:31.000 real 0m7.458s 00:06:31.000 user 0m13.461s 00:06:31.000 sys 0m0.287s 00:06:31.000 11:21:16 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.000 11:21:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:31.000 ************************************ 00:06:31.000 END TEST bdev_verify 00:06:31.000 ************************************ 00:06:31.000 11:21:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.000 11:21:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:31.000 11:21:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.000 11:21:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.000 ************************************ 00:06:31.000 START TEST bdev_verify_big_io 00:06:31.000 ************************************ 00:06:31.000 11:21:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.000 [2024-10-27 11:21:16.190844] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:31.000 [2024-10-27 11:21:16.190959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60336 ] 00:06:31.258 [2024-10-27 11:21:16.353075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.258 [2024-10-27 11:21:16.450684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.258 [2024-10-27 11:21:16.450758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.195 Running I/O for 5 seconds... 
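The summary table above is easy to sanity-check: with 4096-byte IOs, bandwidth is simply IOPS times the IO size, so the 22876.30 total IOPS translate to about 89.36 MiB/s, exactly as reported. The bdev_verify_big_io run that has just started repeats the same workload with -o 65536, so the same arithmetic applies with 64 KiB IOs. As a one-liner:

    # Cross-checking the verify totals above: IOPS * IO size / 1 MiB.
    awk 'BEGIN { printf "%.2f MiB/s\n", 22876.30 * 4096 / 1048576 }'   # -> 89.36 MiB/s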
00:06:38.032 1429.00 IOPS, 89.31 MiB/s [2024-10-27T11:21:23.313Z] 2926.50 IOPS, 182.91 MiB/s 00:06:38.032 Latency(us) 00:06:38.032 [2024-10-27T11:21:23.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.032 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0xbd0b 00:06:38.032 Nvme0n1 : 5.60 125.82 7.86 0.00 0.00 979767.64 33272.12 1064707.94 00:06:38.032 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:38.032 Nvme0n1 : 5.68 117.39 7.34 0.00 0.00 1047944.75 34078.72 1238932.87 00:06:38.032 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0xa000 00:06:38.032 Nvme1n1 : 5.70 130.86 8.18 0.00 0.00 922397.37 42346.34 916294.10 00:06:38.032 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0xa000 length 0xa000 00:06:38.032 Nvme1n1 : 5.88 117.57 7.35 0.00 0.00 996552.83 117763.15 1303460.63 00:06:38.032 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0x8000 00:06:38.032 Nvme2n1 : 5.70 130.49 8.16 0.00 0.00 893815.91 42346.34 929199.66 00:06:38.032 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x8000 length 0x8000 00:06:38.032 Nvme2n1 : 5.88 112.95 7.06 0.00 0.00 996215.23 118569.75 1793871.56 00:06:38.032 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0x8000 00:06:38.032 Nvme2n2 : 5.70 134.66 8.42 0.00 0.00 844259.91 60898.07 1019538.51 00:06:38.032 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x8000 length 0x8000 00:06:38.032 Nvme2n2 : 5.95 126.20 7.89 0.00 0.00 870856.10 14216.27 1819682.66 00:06:38.032 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0x8000 00:06:38.032 Nvme2n3 : 5.89 147.63 9.23 0.00 0.00 745118.74 40329.85 967916.31 00:06:38.032 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x8000 length 0x8000 00:06:38.032 Nvme2n3 : 5.98 139.40 8.71 0.00 0.00 759881.81 8519.68 1445421.69 00:06:38.032 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x0 length 0x2000 00:06:38.032 Nvme3n1 : 5.94 167.99 10.50 0.00 0.00 637245.50 983.04 1058255.16 00:06:38.032 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.032 Verification LBA range: start 0x2000 length 0x2000 00:06:38.032 Nvme3n1 : 6.09 218.53 13.66 0.00 0.00 474992.35 374.94 1477685.56 00:06:38.032 [2024-10-27T11:21:23.313Z] =================================================================================================================== 00:06:38.032 [2024-10-27T11:21:23.313Z] Total : 1669.50 104.34 0.00 0.00 813369.49 374.94 1819682.66 00:06:39.939 00:06:39.939 real 0m8.746s 00:06:39.939 user 0m15.981s 00:06:39.939 sys 0m0.223s 00:06:39.939 ************************************ 00:06:39.939 END TEST bdev_verify_big_io 00:06:39.939 ************************************ 00:06:39.939 11:21:24 blockdev_nvme.bdev_verify_big_io -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.939 11:21:24 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:39.939 11:21:24 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.939 11:21:24 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:39.939 11:21:24 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.939 11:21:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.939 ************************************ 00:06:39.939 START TEST bdev_write_zeroes 00:06:39.939 ************************************ 00:06:39.939 11:21:24 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.939 [2024-10-27 11:21:24.980905] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:39.939 [2024-10-27 11:21:24.981025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:06:39.939 [2024-10-27 11:21:25.138947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.197 [2024-10-27 11:21:25.238669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.767 Running I/O for 1 seconds... 00:06:41.713 64128.00 IOPS, 250.50 MiB/s 00:06:41.713 Latency(us) 00:06:41.713 [2024-10-27T11:21:26.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.713 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme0n1 : 1.02 10659.08 41.64 0.00 0.00 11985.58 5041.23 24702.03 00:06:41.713 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme1n1 : 1.02 10646.80 41.59 0.00 0.00 11983.87 9326.28 24802.86 00:06:41.713 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme2n1 : 1.02 10634.58 41.54 0.00 0.00 11962.00 9124.63 24097.08 00:06:41.713 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme2n2 : 1.02 10622.59 41.49 0.00 0.00 11916.97 9225.45 20366.57 00:06:41.713 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme2n3 : 1.03 10610.62 41.45 0.00 0.00 11914.62 9275.86 21072.34 00:06:41.713 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:41.713 Nvme3n1 : 1.03 10598.62 41.40 0.00 0.00 11866.85 8217.21 20164.92 00:06:41.713 [2024-10-27T11:21:26.994Z] =================================================================================================================== 00:06:41.713 [2024-10-27T11:21:26.994Z] Total : 63772.29 249.11 0.00 0.00 11938.31 5041.23 24802.86 00:06:42.661 00:06:42.661 real 0m2.723s 00:06:42.661 user 0m2.433s 00:06:42.661 sys 0m0.175s 00:06:42.661 ************************************ 00:06:42.661 END TEST bdev_write_zeroes 00:06:42.661 ************************************ 00:06:42.661 11:21:27 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.661 11:21:27 
blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 11:21:27 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.661 11:21:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:42.661 11:21:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.661 11:21:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 ************************************ 00:06:42.661 START TEST bdev_json_nonenclosed 00:06:42.661 ************************************ 00:06:42.661 11:21:27 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.661 [2024-10-27 11:21:27.778428] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:42.661 [2024-10-27 11:21:27.778569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60498 ] 00:06:43.014 [2024-10-27 11:21:27.944851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.014 [2024-10-27 11:21:28.066994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.014 [2024-10-27 11:21:28.067102] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:43.014 [2024-10-27 11:21:28.067122] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:43.014 [2024-10-27 11:21:28.067132] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.014 00:06:43.014 real 0m0.553s 00:06:43.014 user 0m0.329s 00:06:43.014 sys 0m0.118s 00:06:43.014 ************************************ 00:06:43.014 END TEST bdev_json_nonenclosed 00:06:43.014 ************************************ 00:06:43.014 11:21:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.014 11:21:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:43.279 11:21:28 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.279 11:21:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:43.279 11:21:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.279 11:21:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.279 ************************************ 00:06:43.279 START TEST bdev_json_nonarray 00:06:43.279 ************************************ 00:06:43.280 11:21:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.280 [2024-10-27 11:21:28.395751] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
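The bdev_json_nonenclosed case above is a negative test: bdevperf is pointed at nonenclosed.json and is expected to reject it with the "not enclosed in {}" error seen in the trace, exiting non-zero so the harness can confirm the JSON config loader catches malformed input. The exact contents of nonenclosed.json are not shown in this log; an illustrative pair of inputs that would exercise the same check (file paths and contents here are assumptions) is:

    # Malformed: the subsystems key is not wrapped in a top-level JSON object.
    cat > /tmp/nonenclosed-example.json <<'EOF'
    "subsystems": []
    EOF

    # Well-formed: the same data enclosed in {} parses as a JSON object.
    cat > /tmp/enclosed-example.json <<'EOF'
    { "subsystems": [] }
    EOF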
00:06:43.280 [2024-10-27 11:21:28.395898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60524 ] 00:06:43.542 [2024-10-27 11:21:28.561681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.542 [2024-10-27 11:21:28.682473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.542 [2024-10-27 11:21:28.682587] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:43.542 [2024-10-27 11:21:28.682607] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:43.542 [2024-10-27 11:21:28.682618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.805 00:06:43.805 real 0m0.556s 00:06:43.805 user 0m0.333s 00:06:43.805 sys 0m0.117s 00:06:43.805 11:21:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.805 ************************************ 00:06:43.805 END TEST bdev_json_nonarray 00:06:43.805 ************************************ 00:06:43.805 11:21:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:43.805 11:21:28 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:43.805 00:06:43.805 real 0m36.192s 00:06:43.805 user 0m55.645s 00:06:43.805 sys 0m4.963s 00:06:43.805 11:21:28 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.805 11:21:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.805 ************************************ 00:06:43.805 END TEST blockdev_nvme 00:06:43.805 ************************************ 00:06:43.805 11:21:28 -- spdk/autotest.sh@209 -- # uname -s 00:06:43.805 11:21:28 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:43.805 11:21:28 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.805 11:21:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.805 11:21:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.805 11:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:43.805 ************************************ 00:06:43.805 START TEST blockdev_nvme_gpt 00:06:43.805 ************************************ 00:06:43.805 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:43.805 * Looking for test storage... 
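Every stage in this log, including the blockdev_nvme_gpt suite that starts here, is driven through the same run_test wrapper: it prints the START banner, times the command, prints the END banner, and propagates the exit status so autotest.sh can keep score. The real implementation lives in autotest_common.sh and also manages xtrace; a stripped-down sketch of the idea (names and banner width are approximations, not the actual helper) is:

    # Simplified stand-in for the run_test helper used throughout this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

    run_test_sketch blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt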
00:06:43.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:43.805 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:43.805 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lcov --version 00:06:43.805 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.067 11:21:29 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:44.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.067 --rc genhtml_branch_coverage=1 00:06:44.067 --rc genhtml_function_coverage=1 00:06:44.067 --rc genhtml_legend=1 00:06:44.067 --rc geninfo_all_blocks=1 00:06:44.067 --rc geninfo_unexecuted_blocks=1 00:06:44.067 00:06:44.067 ' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:44.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.067 --rc 
genhtml_branch_coverage=1 00:06:44.067 --rc genhtml_function_coverage=1 00:06:44.067 --rc genhtml_legend=1 00:06:44.067 --rc geninfo_all_blocks=1 00:06:44.067 --rc geninfo_unexecuted_blocks=1 00:06:44.067 00:06:44.067 ' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:44.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.067 --rc genhtml_branch_coverage=1 00:06:44.067 --rc genhtml_function_coverage=1 00:06:44.067 --rc genhtml_legend=1 00:06:44.067 --rc geninfo_all_blocks=1 00:06:44.067 --rc geninfo_unexecuted_blocks=1 00:06:44.067 00:06:44.067 ' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:44.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.067 --rc genhtml_branch_coverage=1 00:06:44.067 --rc genhtml_function_coverage=1 00:06:44.067 --rc genhtml_legend=1 00:06:44.067 --rc geninfo_all_blocks=1 00:06:44.067 --rc geninfo_unexecuted_blocks=1 00:06:44.067 00:06:44.067 ' 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:44.067 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60602 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:44.068 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60602 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 60602 ']' 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.068 11:21:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.068 11:21:29 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:44.068 [2024-10-27 11:21:29.253692] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:44.068 [2024-10-27 11:21:29.254242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60602 ] 00:06:44.330 [2024-10-27 11:21:29.419612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.330 [2024-10-27 11:21:29.542538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.275 11:21:30 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.275 11:21:30 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:45.275 11:21:30 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:45.275 11:21:30 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:45.275 11:21:30 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.537 Waiting for block devices as requested 00:06:45.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.537 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.799 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.799 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.092 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1654 -- # local nvme bdf 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 
11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:51.092 11:21:36 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:51.092 BYT; 00:06:51.092 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:51.092 BYT; 00:06:51.092 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.092 11:21:36 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.092 11:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:52.466 The operation has completed successfully. 00:06:52.466 11:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:53.399 The operation has completed successfully. 00:06:53.399 11:21:38 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:53.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.224 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.224 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.224 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.224 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:54.224 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.224 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.224 [] 00:06:54.224 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.224 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.224 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.224 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.481 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.481 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:54.481 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.481 11:21:39 
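Condensing the setup_gpt_conf steps traced above: /dev/nvme0n1 was the first namespace with no recognised disk label (and, per the earlier checks, not zoned — its /sys/block/*/queue/zoned attribute reads "none"), so it receives a fresh GPT label, two equal partitions, and SPDK's own partition-type GUIDs pulled out of module/bdev/gpt/gpt.h. The same flow as plain commands, a sketch with the values exactly as logged and paths relative to the spdk repo:

    # label the blank disk and split it into two halves
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # the partition-type GUIDs scripts/common.sh greps out of the header
    grep -w SPDK_GPT_PART_TYPE_GUID     module/bdev/gpt/gpt.h   # 6527994e-2c5a-4eec-9613-8f5944074e8b
    grep -w SPDK_GPT_PART_TYPE_GUID_OLD module/bdev/gpt/gpt.h   # 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
    # retag both partitions; they surface later as the Nvme1n1p1/Nvme1n1p2 GPT bdevs
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1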
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.481 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.481 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.481 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.740 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "05794856-060b-4eaa-9181-5dcd06b5bf09"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "05794856-060b-4eaa-9181-5dcd06b5bf09",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "70267058-9d12-4ff9-9c9a-e9bee04380be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70267058-9d12-4ff9-9c9a-e9bee04380be",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a4f95ad7-79a7-4ad7-a833-fd726477ebca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a4f95ad7-79a7-4ad7-a833-fd726477ebca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8b5d2ea8-a42c-49b3-a7f5-16395c64b984"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8b5d2ea8-a42c-49b3-a7f5-16395c64b984",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "89fa1300-dd1e-4830-bfb9-768677228ae6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "89fa1300-dd1e-4830-bfb9-768677228ae6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.740 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.741 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:54.741 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:54.741 11:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60602 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 60602 ']' 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 60602 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60602 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60602' 00:06:54.741 killing process with pid 60602 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 60602 00:06:54.741 11:21:39 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 60602 00:06:56.115 11:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:56.115 11:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.115 11:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:56.115 11:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.115 11:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.115 ************************************ 00:06:56.115 START TEST bdev_hello_world 00:06:56.115 ************************************ 00:06:56.115 11:21:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.115 
[2024-10-27 11:21:41.123728] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:56.115 [2024-10-27 11:21:41.123842] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:06:56.115 [2024-10-27 11:21:41.280785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.116 [2024-10-27 11:21:41.355216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.683 [2024-10-27 11:21:41.843189] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:56.683 [2024-10-27 11:21:41.843227] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:56.683 [2024-10-27 11:21:41.843241] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:56.683 [2024-10-27 11:21:41.845141] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:56.683 [2024-10-27 11:21:41.845690] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:56.683 [2024-10-27 11:21:41.845715] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:56.683 [2024-10-27 11:21:41.845941] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:56.683 00:06:56.683 [2024-10-27 11:21:41.845965] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:57.250 00:06:57.250 real 0m1.332s 00:06:57.250 user 0m1.075s 00:06:57.250 sys 0m0.153s 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.250 ************************************ 00:06:57.250 END TEST bdev_hello_world 00:06:57.250 ************************************ 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 11:21:42 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:57.250 11:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.250 11:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.250 11:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 ************************************ 00:06:57.250 START TEST bdev_bounds 00:06:57.250 ************************************ 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:57.250 Process bdevio pid: 61260 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61260 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61260' 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61260 00:06:57.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61260 ']' 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:57.250 11:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.250 [2024-10-27 11:21:42.503656] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:06:57.250 [2024-10-27 11:21:42.503745] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:06:57.508 [2024-10-27 11:21:42.653776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.508 [2024-10-27 11:21:42.731882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.508 [2024-10-27 11:21:42.732450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.508 [2024-10-27 11:21:42.732481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.076 11:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.076 11:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:58.076 11:21:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:58.333 I/O targets: 00:06:58.333 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:58.333 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:58.333 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:58.333 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.333 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.333 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.333 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:58.333 00:06:58.333 00:06:58.333 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.333 http://cunit.sourceforge.net/ 00:06:58.333 00:06:58.333 00:06:58.333 Suite: bdevio tests on: Nvme3n1 00:06:58.333 Test: blockdev write read block ...passed 00:06:58.333 Test: blockdev write zeroes read block ...passed 00:06:58.333 Test: blockdev write zeroes read no split ...passed 00:06:58.333 Test: blockdev write zeroes read split ...passed 00:06:58.333 Test: blockdev write zeroes read split partial ...passed 00:06:58.333 Test: blockdev reset ...[2024-10-27 11:21:43.485443] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:58.333 [2024-10-27 11:21:43.489332] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:58.333 passed 00:06:58.333 Test: blockdev write read 8 blocks ...passed 00:06:58.333 Test: blockdev write read size > 128k ...passed 00:06:58.333 Test: blockdev write read invalid size ...passed 00:06:58.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.333 Test: blockdev write read max offset ...passed 00:06:58.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.333 Test: blockdev writev readv 8 blocks ...passed 00:06:58.333 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.333 Test: blockdev writev readv block ...passed 00:06:58.333 Test: blockdev writev readv size > 128k ...passed 00:06:58.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.333 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.508564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0004000 len:0x1000 00:06:58.333 [2024-10-27 11:21:43.508674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.333 passed 00:06:58.333 Test: blockdev nvme passthru rw ...passed 00:06:58.333 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:21:43.511149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.333 [2024-10-27 11:21:43.511214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.333 passed 00:06:58.333 Test: blockdev nvme admin passthru ...passed 00:06:58.333 Test: blockdev copy ...passed 00:06:58.333 Suite: bdevio tests on: Nvme2n3 00:06:58.333 Test: blockdev write read block ...passed 00:06:58.333 Test: blockdev write zeroes read block ...passed 00:06:58.333 Test: blockdev write zeroes read no split ...passed 00:06:58.333 Test: blockdev write zeroes read split ...passed 00:06:58.333 Test: blockdev write zeroes read split partial ...passed 00:06:58.333 Test: blockdev reset ...[2024-10-27 11:21:43.567384] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.333 [2024-10-27 11:21:43.570970] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:58.333 passed 00:06:58.333 Test: blockdev write read 8 blocks ...passed 00:06:58.333 Test: blockdev write read size > 128k ...passed 00:06:58.333 Test: blockdev write read invalid size ...passed 00:06:58.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.333 Test: blockdev write read max offset ...passed 00:06:58.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.333 Test: blockdev writev readv 8 blocks ...passed 00:06:58.333 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.333 Test: blockdev writev readv block ...passed 00:06:58.333 Test: blockdev writev readv size > 128k ...passed 00:06:58.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.333 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.587114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0002000 len:0x1000 00:06:58.333 [2024-10-27 11:21:43.587159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.333 passed 00:06:58.333 Test: blockdev nvme passthru rw ...passed 00:06:58.333 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:21:43.589389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.333 [2024-10-27 11:21:43.589421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.333 passed 00:06:58.333 Test: blockdev nvme admin passthru ...passed 00:06:58.333 Test: blockdev copy ...passed 00:06:58.333 Suite: bdevio tests on: Nvme2n2 00:06:58.333 Test: blockdev write read block ...passed 00:06:58.333 Test: blockdev write zeroes read block ...passed 00:06:58.333 Test: blockdev write zeroes read no split ...passed 00:06:58.591 Test: blockdev write zeroes read split ...passed 00:06:58.591 Test: blockdev write zeroes read split partial ...passed 00:06:58.591 Test: blockdev reset ...[2024-10-27 11:21:43.649244] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.591 [2024-10-27 11:21:43.653121] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:58.591 passed 00:06:58.591 Test: blockdev write read 8 blocks ...passed 00:06:58.591 Test: blockdev write read size > 128k ...passed 00:06:58.591 Test: blockdev write read invalid size ...passed 00:06:58.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.591 Test: blockdev write read max offset ...passed 00:06:58.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.591 Test: blockdev writev readv 8 blocks ...passed 00:06:58.591 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.591 Test: blockdev writev readv block ...passed 00:06:58.591 Test: blockdev writev readv size > 128k ...passed 00:06:58.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.591 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.669512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5438000 len:0x1000 00:06:58.591 [2024-10-27 11:21:43.669558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.591 passed 00:06:58.591 Test: blockdev nvme passthru rw ...passed 00:06:58.591 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:21:43.671555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.591 [2024-10-27 11:21:43.671583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.591 passed 00:06:58.591 Test: blockdev nvme admin passthru ...passed 00:06:58.591 Test: blockdev copy ...passed 00:06:58.591 Suite: bdevio tests on: Nvme2n1 00:06:58.591 Test: blockdev write read block ...passed 00:06:58.591 Test: blockdev write zeroes read block ...passed 00:06:58.591 Test: blockdev write zeroes read no split ...passed 00:06:58.591 Test: blockdev write zeroes read split ...passed 00:06:58.591 Test: blockdev write zeroes read split partial ...passed 00:06:58.591 Test: blockdev reset ...[2024-10-27 11:21:43.730981] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.591 [2024-10-27 11:21:43.734355] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:58.591 passed 00:06:58.591 Test: blockdev write read 8 blocks ...passed 00:06:58.591 Test: blockdev write read size > 128k ...passed 00:06:58.591 Test: blockdev write read invalid size ...passed 00:06:58.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.591 Test: blockdev write read max offset ...passed 00:06:58.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.591 Test: blockdev writev readv 8 blocks ...passed 00:06:58.591 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.591 Test: blockdev writev readv block ...passed 00:06:58.591 Test: blockdev writev readv size > 128k ...passed 00:06:58.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.591 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.751461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5434000 len:0x1000 00:06:58.591 [2024-10-27 11:21:43.751511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.591 passed 00:06:58.591 Test: blockdev nvme passthru rw ...passed 00:06:58.591 Test: blockdev nvme passthru vendor specific ...[2024-10-27 11:21:43.753801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.591 [2024-10-27 11:21:43.753830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.591 passed 00:06:58.591 Test: blockdev nvme admin passthru ...passed 00:06:58.591 Test: blockdev copy ...passed 00:06:58.591 Suite: bdevio tests on: Nvme1n1p2 00:06:58.591 Test: blockdev write read block ...passed 00:06:58.591 Test: blockdev write zeroes read block ...passed 00:06:58.591 Test: blockdev write zeroes read no split ...passed 00:06:58.591 Test: blockdev write zeroes read split ...passed 00:06:58.591 Test: blockdev write zeroes read split partial ...passed 00:06:58.591 Test: blockdev reset ...[2024-10-27 11:21:43.812997] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:58.591 [2024-10-27 11:21:43.816284] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:58.591 passed 00:06:58.591 Test: blockdev write read 8 blocks ...passed 00:06:58.591 Test: blockdev write read size > 128k ...passed 00:06:58.591 Test: blockdev write read invalid size ...passed 00:06:58.591 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.591 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.591 Test: blockdev write read max offset ...passed 00:06:58.591 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.591 Test: blockdev writev readv 8 blocks ...passed 00:06:58.591 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.591 Test: blockdev writev readv block ...passed 00:06:58.591 Test: blockdev writev readv size > 128k ...passed 00:06:58.591 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.591 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.832903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d5430000 len:0x1000 00:06:58.591 [2024-10-27 11:21:43.832941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.591 passed 00:06:58.591 Test: blockdev nvme passthru rw ...passed 00:06:58.591 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.591 Test: blockdev nvme admin passthru ...passed 00:06:58.591 Test: blockdev copy ...passed 00:06:58.591 Suite: bdevio tests on: Nvme1n1p1 00:06:58.591 Test: blockdev write read block ...passed 00:06:58.591 Test: blockdev write zeroes read block ...passed 00:06:58.591 Test: blockdev write zeroes read no split ...passed 00:06:58.849 Test: blockdev write zeroes read split ...passed 00:06:58.849 Test: blockdev write zeroes read split partial ...passed 00:06:58.849 Test: blockdev reset ...[2024-10-27 11:21:43.888879] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:58.849 [2024-10-27 11:21:43.892260] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:58.849 passed 00:06:58.849 Test: blockdev write read 8 blocks ...passed 00:06:58.849 Test: blockdev write read size > 128k ...passed 00:06:58.849 Test: blockdev write read invalid size ...passed 00:06:58.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.849 Test: blockdev write read max offset ...passed 00:06:58.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.849 Test: blockdev writev readv 8 blocks ...passed 00:06:58.849 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.849 Test: blockdev writev readv block ...passed 00:06:58.849 Test: blockdev writev readv size > 128k ...passed 00:06:58.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.849 Test: blockdev comparev and writev ...[2024-10-27 11:21:43.908877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b0a0e000 len:0x1000 00:06:58.849 [2024-10-27 11:21:43.908920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.849 passed 00:06:58.849 Test: blockdev nvme passthru rw ...passed 00:06:58.849 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.849 Test: blockdev nvme admin passthru ...passed 00:06:58.849 Test: blockdev copy ...passed 00:06:58.849 Suite: bdevio tests on: Nvme0n1 00:06:58.849 Test: blockdev write read block ...passed 00:06:58.849 Test: blockdev write zeroes read block ...passed 00:06:58.849 Test: blockdev write zeroes read no split ...passed 00:06:58.849 Test: blockdev write zeroes read split ...passed 00:06:58.849 Test: blockdev write zeroes read split partial ...passed 00:06:58.849 Test: blockdev reset ...[2024-10-27 11:21:43.963491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:58.849 [2024-10-27 11:21:43.966562] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:58.849 passed 00:06:58.849 Test: blockdev write read 8 blocks ...passed 00:06:58.849 Test: blockdev write read size > 128k ...passed 00:06:58.849 Test: blockdev write read invalid size ...passed 00:06:58.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.849 Test: blockdev write read max offset ...passed 00:06:58.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.849 Test: blockdev writev readv 8 blocks ...passed 00:06:58.849 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.849 Test: blockdev writev readv block ...passed 00:06:58.849 Test: blockdev writev readv size > 128k ...passed 00:06:58.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.849 Test: blockdev comparev and writev ...passed 00:06:58.849 Test: blockdev nvme passthru rw ...[2024-10-27 11:21:43.981686] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:58.849 separate metadata which is not supported yet. 
00:06:58.849 passed 00:06:58.849 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.849 Test: blockdev nvme admin passthru ...[2024-10-27 11:21:43.983321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:58.849 [2024-10-27 11:21:43.983365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:58.849 passed 00:06:58.849 Test: blockdev copy ...passed 00:06:58.849 00:06:58.849 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.849 suites 7 7 n/a 0 0 00:06:58.849 tests 161 161 161 0 0 00:06:58.849 asserts 1025 1025 1025 0 n/a 00:06:58.849 00:06:58.849 Elapsed time = 1.387 seconds 00:06:58.849 0 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61260 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61260 ']' 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61260 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61260 00:06:58.849 killing process with pid 61260 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61260' 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61260 00:06:58.849 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61260 00:06:59.783 ************************************ 00:06:59.783 END TEST bdev_bounds 00:06:59.783 ************************************ 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:59.783 00:06:59.783 real 0m2.244s 00:06:59.783 user 0m5.772s 00:06:59.783 sys 0m0.258s 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.783 11:21:44 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.783 11:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:59.783 11:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.783 11:21:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.783 ************************************ 00:06:59.783 START TEST bdev_nbd 00:06:59.783 ************************************ 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:59.783 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61320 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61320 /var/tmp/spdk-nbd.sock 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61320 ']' 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.784 11:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:59.784 [2024-10-27 11:21:44.822054] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:06:59.784 [2024-10-27 11:21:44.822170] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.784 [2024-10-27 11:21:44.979951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.042 [2024-10-27 11:21:45.075500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.608 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.866 1+0 records in 00:07:00.866 1+0 records out 00:07:00.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078195 s, 5.2 MB/s 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.866 11:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.866 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.867 1+0 records in 00:07:00.867 1+0 records out 00:07:00.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620208 s, 6.6 MB/s 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.867 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.125 1+0 records in 00:07:01.125 1+0 records out 00:07:01.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000938844 s, 4.4 MB/s 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.125 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.383 1+0 records in 00:07:01.383 1+0 records out 00:07:01.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101732 s, 4.0 MB/s 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.383 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.642 1+0 records in 00:07:01.642 1+0 records out 00:07:01.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000990676 s, 4.1 MB/s 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.642 11:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.900 1+0 records in 00:07:01.900 1+0 records out 00:07:01.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968671 s, 4.2 MB/s 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.900 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.158 1+0 records in 00:07:02.158 1+0 records out 00:07:02.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119411 s, 3.4 MB/s 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.158 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.416 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd0", 00:07:02.417 "bdev_name": "Nvme0n1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd1", 00:07:02.417 "bdev_name": "Nvme1n1p1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd2", 00:07:02.417 "bdev_name": "Nvme1n1p2" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd3", 00:07:02.417 "bdev_name": "Nvme2n1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd4", 00:07:02.417 "bdev_name": "Nvme2n2" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd5", 00:07:02.417 "bdev_name": "Nvme2n3" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd6", 00:07:02.417 "bdev_name": "Nvme3n1" 00:07:02.417 } 00:07:02.417 ]' 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd0", 00:07:02.417 "bdev_name": "Nvme0n1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd1", 00:07:02.417 "bdev_name": "Nvme1n1p1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd2", 00:07:02.417 "bdev_name": "Nvme1n1p2" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd3", 00:07:02.417 "bdev_name": "Nvme2n1" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd4", 00:07:02.417 "bdev_name": "Nvme2n2" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd5", 00:07:02.417 "bdev_name": "Nvme2n3" 00:07:02.417 }, 00:07:02.417 { 00:07:02.417 "nbd_device": "/dev/nbd6", 00:07:02.417 "bdev_name": "Nvme3n1" 00:07:02.417 } 00:07:02.417 ]' 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.417 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.675 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.934 11:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.934 11:21:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.192 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.450 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.708 11:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
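The trace above exercises SPDK's NBD export path end to end: each bdev is attached to a /dev/nbdX node with the nbd_start_disk RPC, probed with a single 4 KiB direct-I/O read, and detached again with nbd_stop_disk. A minimal manual sketch of the same round trip, assuming the bdev_svc target from this run is still listening on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and a bdev named Nvme0n1 exists:

  # attach the bdev to an explicit NBD node
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # probe it with one 4 KiB O_DIRECT read, mirroring the waitfornbd check in the trace
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # detach the node again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0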
00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.966 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.224 11:21:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.224 /dev/nbd0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.224 1+0 records in 00:07:04.224 1+0 records out 00:07:04.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458238 s, 8.9 MB/s 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.224 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:04.606 /dev/nbd1 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.606 11:21:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.606 1+0 records in 00:07:04.606 1+0 records out 00:07:04.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345339 s, 11.9 MB/s 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.606 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:04.896 /dev/nbd10 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.896 1+0 records in 00:07:04.896 1+0 records out 00:07:04.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496296 s, 8.3 MB/s 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.896 11:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:04.896 /dev/nbd11 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.896 1+0 records in 00:07:04.896 1+0 records out 00:07:04.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047982 s, 8.5 MB/s 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.896 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:05.154 /dev/nbd12 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
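For reference, the device enumeration the harness repeats between steps reduces to one RPC plus a jq filter; a sketch under the same socket assumption as above:

  # list every NBD device currently exported by the target
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
  # count the /dev/nbd entries, as the test does before its write/verify pass
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd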
00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.154 1+0 records in 00:07:05.154 1+0 records out 00:07:05.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427958 s, 9.6 MB/s 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.154 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:05.412 /dev/nbd13 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.412 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.412 1+0 records in 00:07:05.412 1+0 records out 00:07:05.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440421 s, 9.3 MB/s 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.413 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:05.670 /dev/nbd14 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.670 1+0 records in 00:07:05.670 1+0 records out 00:07:05.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546819 s, 7.5 MB/s 00:07:05.670 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.671 11:21:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.929 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd0", 00:07:05.929 "bdev_name": "Nvme0n1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd1", 00:07:05.929 "bdev_name": "Nvme1n1p1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd10", 00:07:05.929 "bdev_name": "Nvme1n1p2" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd11", 00:07:05.929 "bdev_name": "Nvme2n1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd12", 00:07:05.929 "bdev_name": "Nvme2n2" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd13", 00:07:05.929 "bdev_name": "Nvme2n3" 
00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd14", 00:07:05.929 "bdev_name": "Nvme3n1" 00:07:05.929 } 00:07:05.929 ]' 00:07:05.929 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd0", 00:07:05.929 "bdev_name": "Nvme0n1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd1", 00:07:05.929 "bdev_name": "Nvme1n1p1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd10", 00:07:05.929 "bdev_name": "Nvme1n1p2" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd11", 00:07:05.929 "bdev_name": "Nvme2n1" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd12", 00:07:05.929 "bdev_name": "Nvme2n2" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd13", 00:07:05.929 "bdev_name": "Nvme2n3" 00:07:05.929 }, 00:07:05.929 { 00:07:05.929 "nbd_device": "/dev/nbd14", 00:07:05.929 "bdev_name": "Nvme3n1" 00:07:05.929 } 00:07:05.929 ]' 00:07:05.929 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.929 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.929 /dev/nbd1 00:07:05.929 /dev/nbd10 00:07:05.929 /dev/nbd11 00:07:05.929 /dev/nbd12 00:07:05.929 /dev/nbd13 00:07:05.930 /dev/nbd14' 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.930 /dev/nbd1 00:07:05.930 /dev/nbd10 00:07:05.930 /dev/nbd11 00:07:05.930 /dev/nbd12 00:07:05.930 /dev/nbd13 00:07:05.930 /dev/nbd14' 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:05.930 256+0 records in 00:07:05.930 256+0 records out 00:07:05.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067447 s, 155 MB/s 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.930 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.188 256+0 records in 00:07:06.188 256+0 records out 00:07:06.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0633959 s, 16.5 MB/s 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.188 256+0 records in 00:07:06.188 256+0 records out 00:07:06.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0640137 s, 16.4 MB/s 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.188 256+0 records in 00:07:06.188 256+0 records out 00:07:06.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642693 s, 16.3 MB/s 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.188 256+0 records in 00:07:06.188 256+0 records out 00:07:06.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657141 s, 16.0 MB/s 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.188 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:06.446 256+0 records in 00:07:06.446 256+0 records out 00:07:06.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632122 s, 16.6 MB/s 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:06.446 256+0 records in 00:07:06.446 256+0 records out 00:07:06.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0632594 s, 16.6 MB/s 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:06.446 256+0 records in 00:07:06.446 256+0 records out 00:07:06.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0669978 s, 15.7 MB/s 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.446 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.704 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.704 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.704 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.704 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.704 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.705 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.705 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.705 11:21:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:06.705 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.705 11:21:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.962 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.220 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.478 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.735 11:21:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.993 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:08.251 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:08.512 malloc_lvol_verify 00:07:08.512 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:08.512 bb7ecba8-59d2-4483-b818-a26247062c54 00:07:08.512 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:08.772 af19004e-5a12-4792-aef7-d18faf3cf1b4 00:07:08.772 11:21:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:09.030 /dev/nbd0 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:09.030 mke2fs 1.47.0 (5-Feb-2023) 00:07:09.030 Discarding device blocks: 0/4096 done 00:07:09.030 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:09.030 00:07:09.030 Allocating group tables: 0/1 done 00:07:09.030 Writing inode tables: 0/1 done 00:07:09.030 Creating journal (1024 blocks): done 00:07:09.030 Writing superblocks and filesystem accounting information: 0/1 done 00:07:09.030 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:09.030 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61320 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61320 ']' 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61320 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61320 00:07:09.288 killing process with pid 61320 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61320' 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61320 00:07:09.288 11:21:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61320 00:07:09.855 ************************************ 00:07:09.855 END TEST bdev_nbd 00:07:09.855 ************************************ 00:07:09.855 11:21:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:09.855 00:07:09.855 real 0m10.259s 00:07:09.855 user 0m14.813s 00:07:09.855 sys 0m3.372s 00:07:09.855 11:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.855 11:21:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:09.855 skipping fio tests on NVMe due to multi-ns failures. 00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.855 11:21:55 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.855 11:21:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:09.855 11:21:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.855 11:21:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 ************************************ 00:07:09.855 START TEST bdev_verify 00:07:09.855 ************************************ 00:07:09.855 11:21:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.855 [2024-10-27 11:21:55.128245] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:07:09.855 [2024-10-27 11:21:55.128368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61732 ] 00:07:10.112 [2024-10-27 11:21:55.285055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.112 [2024-10-27 11:21:55.359835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.113 [2024-10-27 11:21:55.359960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.678 Running I/O for 5 seconds... 
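For reference, the verify stage that starts here is a single bdevperf invocation, copied from the run_test line above; the flag meanings below are paraphrased from bdevperf's usage text and should be read as approximate rather than authoritative:

  #   --json <file>  bdev configuration to load (test/bdev/bdev.json in this run)
  #   -q 128         queue depth
  #   -o 4096        I/O size in bytes (4 KiB)
  #   -w verify      write a pattern, read it back and compare
  #   -t 5           run time in seconds
  #   -C             allow every core to submit I/O to every bdev
  #   -m 0x3         core mask: cores 0 and 1 (the two reactors reported above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3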
00:07:12.989 20928.00 IOPS, 81.75 MiB/s [2024-10-27T11:21:59.211Z] 20128.00 IOPS, 78.62 MiB/s [2024-10-27T11:22:00.152Z] 19925.33 IOPS, 77.83 MiB/s [2024-10-27T11:22:01.095Z] 19888.00 IOPS, 77.69 MiB/s [2024-10-27T11:22:01.095Z] 20160.00 IOPS, 78.75 MiB/s 00:07:15.814 Latency(us) 00:07:15.814 [2024-10-27T11:22:01.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0xbd0bd 00:07:15.815 Nvme0n1 : 5.05 1419.01 5.54 0.00 0.00 89871.87 19055.85 81466.29 00:07:15.815 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:15.815 Nvme0n1 : 5.06 1417.06 5.54 0.00 0.00 90070.72 20971.52 79449.80 00:07:15.815 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x4ff80 00:07:15.815 Nvme1n1p1 : 5.05 1418.55 5.54 0.00 0.00 89777.32 21778.12 75820.11 00:07:15.815 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:15.815 Nvme1n1p1 : 5.06 1416.62 5.53 0.00 0.00 89944.50 22786.36 77836.60 00:07:15.815 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x4ff7f 00:07:15.815 Nvme1n1p2 : 5.08 1424.67 5.57 0.00 0.00 89280.06 9326.28 72997.02 00:07:15.815 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:15.815 Nvme1n1p2 : 5.06 1416.17 5.53 0.00 0.00 89799.60 22080.59 77030.01 00:07:15.815 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x80000 00:07:15.815 Nvme2n1 : 5.08 1423.62 5.56 0.00 0.00 89214.20 11998.13 68964.04 00:07:15.815 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x80000 length 0x80000 00:07:15.815 Nvme2n1 : 5.06 1415.78 5.53 0.00 0.00 89662.85 21374.82 70980.53 00:07:15.815 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x80000 00:07:15.815 Nvme2n2 : 5.08 1422.87 5.56 0.00 0.00 88990.51 13107.20 71383.83 00:07:15.815 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x80000 length 0x80000 00:07:15.815 Nvme2n2 : 5.06 1415.38 5.53 0.00 0.00 89520.01 21677.29 73803.62 00:07:15.815 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x80000 00:07:15.815 Nvme2n3 : 5.11 1428.49 5.58 0.00 0.00 88614.42 21979.77 74206.92 00:07:15.815 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x80000 length 0x80000 00:07:15.815 Nvme2n3 : 5.08 1424.76 5.57 0.00 0.00 88758.49 3402.83 76223.41 00:07:15.815 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x0 length 0x20000 00:07:15.815 Nvme3n1 : 5.11 1428.11 5.58 0.00 0.00 88501.31 17442.66 76223.41 00:07:15.815 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.815 Verification LBA range: start 0x20000 length 0x20000 00:07:15.815 Nvme3n1 
: 5.09 1432.77 5.60 0.00 0.00 88194.75 10485.76 79853.10 00:07:15.815 [2024-10-27T11:22:01.096Z] =================================================================================================================== 00:07:15.815 [2024-10-27T11:22:01.096Z] Total : 19903.83 77.75 0.00 0.00 89296.16 3402.83 81466.29 00:07:17.201 00:07:17.201 real 0m7.270s 00:07:17.201 user 0m13.612s 00:07:17.201 sys 0m0.217s 00:07:17.201 11:22:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.201 11:22:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:17.201 ************************************ 00:07:17.201 END TEST bdev_verify 00:07:17.201 ************************************ 00:07:17.201 11:22:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:17.201 11:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:17.201 11:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.201 11:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.201 ************************************ 00:07:17.201 START TEST bdev_verify_big_io 00:07:17.201 ************************************ 00:07:17.201 11:22:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:17.462 [2024-10-27 11:22:02.483255] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:07:17.462 [2024-10-27 11:22:02.483418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61830 ] 00:07:17.462 [2024-10-27 11:22:02.654788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.724 [2024-10-27 11:22:02.780403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.724 [2024-10-27 11:22:02.780425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.296 Running I/O for 5 seconds... 
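A quick sanity check on the totals just printed: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size. Illustrative only:

  echo '19903.83 * 4096 / 1048576' | bc -l     # ~77.75, matching the 77.75 MiB/s total above

The same identity applies to the big-I/O pass starting here, which uses -o 65536 (64 KiB per I/O), so its IOPS figures come out roughly sixteen times lower for a comparable bandwidth.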
00:07:22.800 528.00 IOPS, 33.00 MiB/s [2024-10-27T11:22:09.490Z] 1591.00 IOPS, 99.44 MiB/s [2024-10-27T11:22:10.062Z] 1917.33 IOPS, 119.83 MiB/s 00:07:24.781 Latency(us) 00:07:24.781 [2024-10-27T11:22:10.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.781 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0xbd0b 00:07:24.781 Nvme0n1 : 5.69 102.72 6.42 0.00 0.00 1186911.52 27827.59 1355082.83 00:07:24.781 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:24.781 Nvme0n1 : 5.92 108.16 6.76 0.00 0.00 1113861.83 14317.10 1380893.93 00:07:24.781 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x4ff8 00:07:24.781 Nvme1n1p1 : 5.97 83.03 5.19 0.00 0.00 1410265.61 103244.41 2051982.57 00:07:24.781 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:24.781 Nvme1n1p1 : 5.80 100.79 6.30 0.00 0.00 1162103.48 105664.20 1780966.01 00:07:24.781 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x4ff7 00:07:24.781 Nvme1n1p2 : 6.02 104.85 6.55 0.00 0.00 1098870.71 131475.30 1832588.21 00:07:24.781 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:24.781 Nvme1n1p2 : 6.06 110.04 6.88 0.00 0.00 1045352.33 89935.56 1793871.56 00:07:24.781 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x8000 00:07:24.781 Nvme2n1 : 6.02 116.61 7.29 0.00 0.00 962234.80 42346.34 1129235.69 00:07:24.781 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x8000 length 0x8000 00:07:24.781 Nvme2n1 : 6.06 108.87 6.80 0.00 0.00 1011677.86 89935.56 1819682.66 00:07:24.781 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x8000 00:07:24.781 Nvme2n2 : 6.09 121.69 7.61 0.00 0.00 890672.27 29037.49 1155046.79 00:07:24.781 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x8000 length 0x8000 00:07:24.781 Nvme2n2 : 6.18 116.87 7.30 0.00 0.00 914732.02 80659.69 1832588.21 00:07:24.781 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x8000 00:07:24.781 Nvme2n3 : 6.09 126.09 7.88 0.00 0.00 832728.48 38716.65 1193763.45 00:07:24.781 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x8000 length 0x8000 00:07:24.781 Nvme2n3 : 6.20 127.30 7.96 0.00 0.00 817022.56 9427.10 1858399.31 00:07:24.781 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x0 length 0x2000 00:07:24.781 Nvme3n1 : 6.19 148.34 9.27 0.00 0.00 686365.53 586.04 1219574.55 00:07:24.781 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.781 Verification LBA range: start 0x2000 length 0x2000 00:07:24.781 Nvme3n1 : 6.26 172.11 10.76 0.00 0.00 585019.91 548.23 1884210.41 00:07:24.781 
[2024-10-27T11:22:10.062Z] =================================================================================================================== 00:07:24.781 [2024-10-27T11:22:10.063Z] Total : 1647.46 102.97 0.00 0.00 940273.02 548.23 2051982.57 00:07:26.693 00:07:26.693 real 0m9.130s 00:07:26.693 user 0m17.226s 00:07:26.693 sys 0m0.322s 00:07:26.693 11:22:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.693 ************************************ 00:07:26.693 END TEST bdev_verify_big_io 00:07:26.693 ************************************ 00:07:26.693 11:22:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:26.693 11:22:11 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.693 11:22:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:26.693 11:22:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.693 11:22:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.693 ************************************ 00:07:26.693 START TEST bdev_write_zeroes 00:07:26.693 ************************************ 00:07:26.693 11:22:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.693 [2024-10-27 11:22:11.668518] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:07:26.693 [2024-10-27 11:22:11.668633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61939 ] 00:07:26.694 [2024-10-27 11:22:11.824333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.694 [2024-10-27 11:22:11.902066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.265 Running I/O for 1 seconds... 
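The write_zeroes workload that has just started relies on the bdev layer's write_zeroes I/O type; whether a given bdev reports it shows up later in this log as the supported_io_types.write_zeroes flag in the bdev JSON. A hedged one-liner for inspecting that by hand against a running target on the default RPC socket (the jq expression is illustrative, not part of the test):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | "\(.name) write_zeroes=\(.supported_io_types.write_zeroes)"'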
00:07:28.205 52850.00 IOPS, 206.45 MiB/s 00:07:28.205 Latency(us) 00:07:28.205 [2024-10-27T11:22:13.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.205 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme0n1 : 1.03 7539.32 29.45 0.00 0.00 16937.59 6074.68 219394.36 00:07:28.205 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme1n1p1 : 1.03 7543.49 29.47 0.00 0.00 16906.77 11040.30 212941.59 00:07:28.205 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme1n1p2 : 1.03 7533.91 29.43 0.00 0.00 16838.80 8721.33 209715.20 00:07:28.205 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme2n1 : 1.03 7525.34 29.40 0.00 0.00 16834.04 8519.68 209715.20 00:07:28.205 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme2n2 : 1.03 7516.76 29.36 0.00 0.00 16831.61 8318.03 209715.20 00:07:28.205 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme2n3 : 1.03 7508.25 29.33 0.00 0.00 16818.74 8015.56 211328.39 00:07:28.205 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:28.205 Nvme3n1 : 1.03 7499.65 29.30 0.00 0.00 16804.25 10687.41 211328.39 00:07:28.205 [2024-10-27T11:22:13.486Z] =================================================================================================================== 00:07:28.205 [2024-10-27T11:22:13.486Z] Total : 52666.71 205.73 0.00 0.00 16853.09 6074.68 219394.36 00:07:29.144 00:07:29.144 real 0m2.693s 00:07:29.144 user 0m2.403s 00:07:29.144 sys 0m0.174s 00:07:29.144 11:22:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.144 ************************************ 00:07:29.144 END TEST bdev_write_zeroes 00:07:29.144 ************************************ 00:07:29.144 11:22:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:29.144 11:22:14 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.144 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.144 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.144 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.144 ************************************ 00:07:29.144 START TEST bdev_json_nonenclosed 00:07:29.144 ************************************ 00:07:29.144 11:22:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.403 [2024-10-27 11:22:14.439334] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:07:29.403 [2024-10-27 11:22:14.439480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61992 ] 00:07:29.403 [2024-10-27 11:22:14.605690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.663 [2024-10-27 11:22:14.724860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.663 [2024-10-27 11:22:14.724963] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:29.663 [2024-10-27 11:22:14.724982] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.663 [2024-10-27 11:22:14.724992] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.663 00:07:29.663 real 0m0.547s 00:07:29.663 user 0m0.319s 00:07:29.663 sys 0m0.122s 00:07:29.663 11:22:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.663 ************************************ 00:07:29.663 END TEST bdev_json_nonenclosed 00:07:29.663 ************************************ 00:07:29.663 11:22:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:29.924 11:22:14 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.924 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.924 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.924 11:22:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.924 ************************************ 00:07:29.924 START TEST bdev_json_nonarray 00:07:29.924 ************************************ 00:07:29.924 11:22:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.924 [2024-10-27 11:22:15.048943] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:07:29.924 [2024-10-27 11:22:15.049090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62012 ] 00:07:30.185 [2024-10-27 11:22:15.210733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.185 [2024-10-27 11:22:15.331823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.185 [2024-10-27 11:22:15.331930] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
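Both JSON failures above are intentional: nonenclosed.json and nonarray.json are deliberately malformed configurations, and the tests only check that the application refuses them and stops with a non-zero status. The files themselves are not shown in this log, but the two error messages imply shapes roughly like the following (illustrative guesses, not the actual file contents):

  not enclosed in {} (top level is not a JSON object):
    "subsystems": []

  'subsystems' should be an array (here it is an object instead):
    { "subsystems": { } }

  a well-formed configuration, for comparison, starts like:
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }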
00:07:30.185 [2024-10-27 11:22:15.331950] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:30.185 [2024-10-27 11:22:15.331960] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.447 00:07:30.447 real 0m0.545s 00:07:30.447 user 0m0.332s 00:07:30.447 sys 0m0.107s 00:07:30.447 ************************************ 00:07:30.447 END TEST bdev_json_nonarray 00:07:30.447 ************************************ 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:30.447 11:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:30.447 11:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:30.447 11:22:15 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:30.447 11:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.447 11:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.447 11:22:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.447 ************************************ 00:07:30.447 START TEST bdev_gpt_uuid 00:07:30.447 ************************************ 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62043 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62043 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62043 ']' 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.447 11:22:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.447 [2024-10-27 11:22:15.724432] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
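The bdev_gpt_uuid test starting here brings up spdk_tgt, loads bdev.json, and then asks for each GPT partition bdev by its unique partition GUID, comparing the returned JSON fields against the expected values (the full output follows below). Reduced to its essentials, the first check looks roughly like this, with the GUID copied from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  bdev=$($rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)      # SPDK_TEST_first partition

  echo "$bdev" | jq -r length                                              # expect exactly one bdev
  echo "$bdev" | jq -r '.[0].aliases[0]'                                   # expect the same GUID back
  echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'    # ditto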
00:07:30.447 [2024-10-27 11:22:15.724608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:07:30.709 [2024-10-27 11:22:15.886050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.970 [2024-10-27 11:22:16.006189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.542 11:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.542 11:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:31.542 11:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:31.542 11:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.542 11:22:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:31.803 Some configs were skipped because the RPC state that can call them passed over. 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:31.804 { 00:07:31.804 "name": "Nvme1n1p1", 00:07:31.804 "aliases": [ 00:07:31.804 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:31.804 ], 00:07:31.804 "product_name": "GPT Disk", 00:07:31.804 "block_size": 4096, 00:07:31.804 "num_blocks": 655104, 00:07:31.804 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:31.804 "assigned_rate_limits": { 00:07:31.804 "rw_ios_per_sec": 0, 00:07:31.804 "rw_mbytes_per_sec": 0, 00:07:31.804 "r_mbytes_per_sec": 0, 00:07:31.804 "w_mbytes_per_sec": 0 00:07:31.804 }, 00:07:31.804 "claimed": false, 00:07:31.804 "zoned": false, 00:07:31.804 "supported_io_types": { 00:07:31.804 "read": true, 00:07:31.804 "write": true, 00:07:31.804 "unmap": true, 00:07:31.804 "flush": true, 00:07:31.804 "reset": true, 00:07:31.804 "nvme_admin": false, 00:07:31.804 "nvme_io": false, 00:07:31.804 "nvme_io_md": false, 00:07:31.804 "write_zeroes": true, 00:07:31.804 "zcopy": false, 00:07:31.804 "get_zone_info": false, 00:07:31.804 "zone_management": false, 00:07:31.804 "zone_append": false, 00:07:31.804 "compare": true, 00:07:31.804 "compare_and_write": false, 00:07:31.804 "abort": true, 00:07:31.804 "seek_hole": false, 00:07:31.804 "seek_data": false, 00:07:31.804 "copy": true, 00:07:31.804 "nvme_iov_md": false 00:07:31.804 }, 00:07:31.804 "driver_specific": { 
00:07:31.804 "gpt": { 00:07:31.804 "base_bdev": "Nvme1n1", 00:07:31.804 "offset_blocks": 256, 00:07:31.804 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:31.804 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:31.804 "partition_name": "SPDK_TEST_first" 00:07:31.804 } 00:07:31.804 } 00:07:31.804 } 00:07:31.804 ]' 00:07:31.804 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:32.065 { 00:07:32.065 "name": "Nvme1n1p2", 00:07:32.065 "aliases": [ 00:07:32.065 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:32.065 ], 00:07:32.065 "product_name": "GPT Disk", 00:07:32.065 "block_size": 4096, 00:07:32.065 "num_blocks": 655103, 00:07:32.065 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:32.065 "assigned_rate_limits": { 00:07:32.065 "rw_ios_per_sec": 0, 00:07:32.065 "rw_mbytes_per_sec": 0, 00:07:32.065 "r_mbytes_per_sec": 0, 00:07:32.065 "w_mbytes_per_sec": 0 00:07:32.065 }, 00:07:32.065 "claimed": false, 00:07:32.065 "zoned": false, 00:07:32.065 "supported_io_types": { 00:07:32.065 "read": true, 00:07:32.065 "write": true, 00:07:32.065 "unmap": true, 00:07:32.065 "flush": true, 00:07:32.065 "reset": true, 00:07:32.065 "nvme_admin": false, 00:07:32.065 "nvme_io": false, 00:07:32.065 "nvme_io_md": false, 00:07:32.065 "write_zeroes": true, 00:07:32.065 "zcopy": false, 00:07:32.065 "get_zone_info": false, 00:07:32.065 "zone_management": false, 00:07:32.065 "zone_append": false, 00:07:32.065 "compare": true, 00:07:32.065 "compare_and_write": false, 00:07:32.065 "abort": true, 00:07:32.065 "seek_hole": false, 00:07:32.065 "seek_data": false, 00:07:32.065 "copy": true, 00:07:32.065 "nvme_iov_md": false 00:07:32.065 }, 00:07:32.065 "driver_specific": { 00:07:32.065 "gpt": { 00:07:32.065 "base_bdev": "Nvme1n1", 00:07:32.065 "offset_blocks": 655360, 00:07:32.065 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:32.065 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:32.065 "partition_name": "SPDK_TEST_second" 00:07:32.065 } 00:07:32.065 } 00:07:32.065 } 00:07:32.065 ]' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62043 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62043 ']' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62043 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62043 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.065 killing process with pid 62043 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62043' 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62043 00:07:32.065 11:22:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62043 00:07:33.977 00:07:33.977 real 0m3.287s 00:07:33.977 user 0m3.319s 00:07:33.977 sys 0m0.487s 00:07:33.977 11:22:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.977 11:22:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:33.977 ************************************ 00:07:33.977 END TEST bdev_gpt_uuid 00:07:33.977 ************************************ 00:07:33.977 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:33.977 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:33.977 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:33.977 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:33.978 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:33.978 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:33.978 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:33.978 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:33.978 11:22:18 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:34.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.239 Waiting for block devices as requested 00:07:34.239 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.500 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:34.500 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.500 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.786 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:39.786 11:22:24 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:39.786 11:22:24 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:40.043 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:40.043 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:40.043 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:40.043 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:40.043 11:22:25 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:40.043 00:07:40.043 real 0m56.102s 00:07:40.043 user 1m11.850s 00:07:40.043 sys 0m7.853s 00:07:40.043 11:22:25 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.043 ************************************ 00:07:40.043 11:22:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.043 END TEST blockdev_nvme_gpt 00:07:40.043 ************************************ 00:07:40.043 11:22:25 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:40.043 11:22:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.043 11:22:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.043 11:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:40.043 ************************************ 00:07:40.043 START TEST nvme 00:07:40.043 ************************************ 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:40.043 * Looking for test storage... 00:07:40.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.043 11:22:25 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.043 11:22:25 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.043 11:22:25 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.043 11:22:25 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.043 11:22:25 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.043 11:22:25 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:40.043 11:22:25 nvme -- scripts/common.sh@345 -- # : 1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.043 11:22:25 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.043 11:22:25 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@353 -- # local d=1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.043 11:22:25 nvme -- scripts/common.sh@355 -- # echo 1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.043 11:22:25 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@353 -- # local d=2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.043 11:22:25 nvme -- scripts/common.sh@355 -- # echo 2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.043 11:22:25 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.043 11:22:25 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.043 11:22:25 nvme -- scripts/common.sh@368 -- # return 0 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.043 --rc genhtml_branch_coverage=1 00:07:40.043 --rc genhtml_function_coverage=1 00:07:40.043 --rc genhtml_legend=1 00:07:40.043 --rc geninfo_all_blocks=1 00:07:40.043 --rc geninfo_unexecuted_blocks=1 00:07:40.043 00:07:40.043 ' 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.043 --rc genhtml_branch_coverage=1 00:07:40.043 --rc genhtml_function_coverage=1 00:07:40.043 --rc genhtml_legend=1 00:07:40.043 --rc geninfo_all_blocks=1 00:07:40.043 --rc geninfo_unexecuted_blocks=1 00:07:40.043 00:07:40.043 ' 00:07:40.043 11:22:25 nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:40.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.043 --rc genhtml_branch_coverage=1 00:07:40.043 --rc genhtml_function_coverage=1 00:07:40.043 --rc genhtml_legend=1 00:07:40.043 --rc geninfo_all_blocks=1 00:07:40.043 --rc geninfo_unexecuted_blocks=1 00:07:40.044 00:07:40.044 ' 00:07:40.044 11:22:25 nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:40.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.044 --rc genhtml_branch_coverage=1 00:07:40.044 --rc genhtml_function_coverage=1 00:07:40.044 --rc genhtml_legend=1 00:07:40.044 --rc geninfo_all_blocks=1 00:07:40.044 --rc geninfo_unexecuted_blocks=1 00:07:40.044 00:07:40.044 ' 00:07:40.044 11:22:25 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:40.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.868 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.868 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.868 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:41.127 11:22:26 nvme -- nvme/nvme.sh@79 -- # uname 00:07:41.127 11:22:26 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:41.127 11:22:26 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:41.127 11:22:26 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:41.127 11:22:26 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:41.127 Waiting for stub to ready for secondary processes... 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1071 -- # stubpid=62678 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/62678 ]] 00:07:41.127 11:22:26 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:41.127 [2024-10-27 11:22:26.221033] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:07:41.127 [2024-10-27 11:22:26.221258] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:41.696 [2024-10-27 11:22:26.965913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.957 [2024-10-27 11:22:27.061104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.957 [2024-10-27 11:22:27.061364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.957 [2024-10-27 11:22:27.061371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.957 [2024-10-27 11:22:27.076269] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:41.957 [2024-10-27 11:22:27.076430] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.957 [2024-10-27 11:22:27.088235] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:41.957 [2024-10-27 11:22:27.088651] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:41.957 [2024-10-27 11:22:27.091414] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.957 [2024-10-27 11:22:27.091678] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:41.957 [2024-10-27 11:22:27.091850] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:41.957 [2024-10-27 11:22:27.094237] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.957 [2024-10-27 11:22:27.094562] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:41.957 [2024-10-27 11:22:27.094683] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:41.957 [2024-10-27 11:22:27.096666] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.957 [2024-10-27 11:22:27.096828] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:41.957 [2024-10-27 11:22:27.096905] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:41.957 [2024-10-27 11:22:27.096957] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:41.957 [2024-10-27 11:22:27.097003] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:41.957 11:22:27 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:41.957 11:22:27 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:41.957 done. 00:07:41.957 11:22:27 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:41.957 11:22:27 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:41.957 11:22:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.957 11:22:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.957 ************************************ 00:07:41.957 START TEST nvme_reset 00:07:41.957 ************************************ 00:07:41.957 11:22:27 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:42.218 Initializing NVMe Controllers 00:07:42.218 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:42.218 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:42.218 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:42.218 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:42.218 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:42.218 00:07:42.218 real 0m0.230s 00:07:42.218 user 0m0.073s 00:07:42.218 sys 0m0.103s 00:07:42.218 11:22:27 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.218 11:22:27 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:42.219 ************************************ 00:07:42.219 END TEST nvme_reset 00:07:42.219 ************************************ 00:07:42.219 11:22:27 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:42.219 11:22:27 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.219 11:22:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.219 11:22:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.219 ************************************ 00:07:42.219 START TEST nvme_identify 00:07:42.219 ************************************ 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:42.219 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:42.219 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:42.219 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:42.219 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # local bdfs 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:42.219 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:07:42.484 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:42.484 11:22:27 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:42.484 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:42.484 [2024-10-27 
11:22:27.711693] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62699 termina===================================================== 00:07:42.484 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:42.484 ===================================================== 00:07:42.484 Controller Capabilities/Features 00:07:42.484 ================================ 00:07:42.484 Vendor ID: 1b36 00:07:42.484 Subsystem Vendor ID: 1af4 00:07:42.485 Serial Number: 12340 00:07:42.485 Model Number: QEMU NVMe Ctrl 00:07:42.485 Firmware Version: 8.0.0 00:07:42.485 Recommended Arb Burst: 6 00:07:42.485 IEEE OUI Identifier: 00 54 52 00:07:42.485 Multi-path I/O 00:07:42.485 May have multiple subsystem ports: No 00:07:42.485 May have multiple controllers: No 00:07:42.485 Associated with SR-IOV VF: No 00:07:42.485 Max Data Transfer Size: 524288 00:07:42.485 Max Number of Namespaces: 256 00:07:42.485 Max Number of I/O Queues: 64 00:07:42.485 NVMe Specification Version (VS): 1.4 00:07:42.485 NVMe Specification Version (Identify): 1.4 00:07:42.485 Maximum Queue Entries: 2048 00:07:42.485 Contiguous Queues Required: Yes 00:07:42.485 Arbitration Mechanisms Supported 00:07:42.485 Weighted Round Robin: Not Supported 00:07:42.485 Vendor Specific: Not Supported 00:07:42.485 Reset Timeout: 7500 ms 00:07:42.485 Doorbell Stride: 4 bytes 00:07:42.485 NVM Subsystem Reset: Not Supported 00:07:42.485 Command Sets Supported 00:07:42.485 NVM Command Set: Supported 00:07:42.485 Boot Partition: Not Supported 00:07:42.485 Memory Page Size Minimum: 4096 bytes 00:07:42.485 Memory Page Size Maximum: 65536 bytes 00:07:42.485 Persistent Memory Region: Not Supported 00:07:42.485 Optional Asynchronous Events Supported 00:07:42.485 Namespace Attribute Notices: Supported 00:07:42.485 Firmware Activation Notices: Not Supported 00:07:42.485 ANA Change Notices: Not Supported 00:07:42.485 PLE Aggregate Log Change Notices: Not Supported 00:07:42.485 LBA Status Info Alert Notices: Not Supported 00:07:42.485 EGE Aggregate Log Change Notices: Not Supported 00:07:42.485 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.485 Zone Descriptor Change Notices: Not Supported 00:07:42.485 Discovery Log Change Notices: Not Supported 00:07:42.485 Controller Attributes 00:07:42.485 128-bit Host Identifier: Not Supported 00:07:42.485 Non-Operational Permissive Mode: Not Supported 00:07:42.485 NVM Sets: Not Supported 00:07:42.485 Read Recovery Levels: Not Supported 00:07:42.485 Endurance Groups: Not Supported 00:07:42.485 Predictable Latency Mode: Not Supported 00:07:42.485 Traffic Based Keep ALive: Not Supported 00:07:42.485 Namespace Granularity: Not Supported 00:07:42.485 SQ Associations: Not Supported 00:07:42.485 UUID List: Not Supported 00:07:42.485 Multi-Domain Subsystem: Not Supported 00:07:42.485 Fixed Capacity Management: Not Supported 00:07:42.485 Variable Capacity Management: Not Supported 00:07:42.485 Delete Endurance Group: Not Supported 00:07:42.485 Delete NVM Set: Not Supported 00:07:42.485 Extended LBA Formats Supported: Supported 00:07:42.485 Flexible Data Placement Supported: Not Supported 00:07:42.485 00:07:42.485 Controller Memory Buffer Support 00:07:42.485 ================================ 00:07:42.485 Supported: No 00:07:42.485 00:07:42.485 Persistent Memory Region Support 00:07:42.485 ================================ 00:07:42.485 Supported: No 00:07:42.485 00:07:42.485 Admin Command Set Attributes 00:07:42.485 ============================ 00:07:42.485 Security Send/Receive: Not Supported 00:07:42.485 
Format NVM: Supported 00:07:42.485 Firmware Activate/Download: Not Supported 00:07:42.485 Namespace Management: Supported 00:07:42.485 Device Self-Test: Not Supported 00:07:42.485 Directives: Supported 00:07:42.485 NVMe-MI: Not Supported 00:07:42.485 Virtualization Management: Not Supported 00:07:42.485 Doorbell Buffer Config: Supported 00:07:42.485 Get LBA Status Capability: Not Supported 00:07:42.485 Command & Feature Lockdown Capability: Not Supported 00:07:42.485 Abort Command Limit: 4 00:07:42.485 Async Event Request Limit: 4 00:07:42.485 Number of Firmware Slots: N/A 00:07:42.485 Firmware Slot 1 Read-Only: N/A 00:07:42.485 Firmware Activation Without Reset: N/A 00:07:42.485 Multiple Update Detection Support: N/A 00:07:42.485 Firmware Update Granularity: No Information Provided 00:07:42.485 Per-Namespace SMART Log: Yes 00:07:42.485 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.485 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:42.485 Command Effects Log Page: Supported 00:07:42.485 Get Log Page Extended Data: Supported 00:07:42.485 Telemetry Log Pages: Not Supported 00:07:42.485 Persistent Event Log Pages: Not Supported 00:07:42.485 Supported Log Pages Log Page: May Support 00:07:42.485 Commands Supported & Effects Log Page: Not Supported 00:07:42.485 Feature Identifiers & Effects Log Page:May Support 00:07:42.485 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.485 Data Area 4 for Telemetry Log: Not Supported 00:07:42.485 Error Log Page Entries Supported: 1 00:07:42.485 Keep Alive: Not Supported 00:07:42.485 00:07:42.485 NVM Command Set Attributes 00:07:42.485 ========================== 00:07:42.485 Submission Queue Entry Size 00:07:42.485 Max: 64 00:07:42.485 Min: 64 00:07:42.485 Completion Queue Entry Size 00:07:42.485 Max: 16 00:07:42.485 Min: 16 00:07:42.485 Number of Namespaces: 256 00:07:42.485 Compare Command: Supported 00:07:42.485 Write Uncorrectable Command: Not Supported 00:07:42.485 Dataset Management Command: Supported 00:07:42.485 Write Zeroes Command: Supported 00:07:42.485 Set Features Save Field: Supported 00:07:42.485 Reservations: Not Supported 00:07:42.485 Timestamp: Supported 00:07:42.485 Copy: Supported 00:07:42.485 Volatile Write Cache: Present 00:07:42.485 Atomic Write Unit (Normal): 1 00:07:42.485 Atomic Write Unit (PFail): 1 00:07:42.485 Atomic Compare & Write Unit: 1 00:07:42.485 Fused Compare & Write: Not Supported 00:07:42.485 Scatter-Gather List 00:07:42.485 SGL Command Set: Supported 00:07:42.485 SGL Keyed: Not Supported 00:07:42.485 SGL Bit Bucket Descriptor: Not Supported 00:07:42.485 SGL Metadata Pointer: Not Supported 00:07:42.485 Oversized SGL: Not Supported 00:07:42.485 SGL Metadata Address: Not Supported 00:07:42.485 SGL Offset: Not Supported 00:07:42.485 Transport SGL Data Block: Not Supported 00:07:42.485 Replay Protected Memory Block: Not Supported 00:07:42.485 00:07:42.485 Firmware Slot Information 00:07:42.485 ========================= 00:07:42.485 Active slot: 1 00:07:42.485 Slot 1 Firmware Revision: 1.0 00:07:42.485 00:07:42.485 00:07:42.485 Commands Supported and Effects 00:07:42.485 ============================== 00:07:42.485 Admin Commands 00:07:42.485 -------------- 00:07:42.485 Delete I/O Submission Queue (00h): Supported 00:07:42.485 Create I/O Submission Queue (01h): Supported 00:07:42.485 Get Log Page (02h): Supported 00:07:42.485 Delete I/O Completion Queue (04h): Supported 00:07:42.485 Create I/O Completion Queue (05h): Supported 00:07:42.485 Identify (06h): Supported 00:07:42.485 Abort (08h): Supported 
00:07:42.485 Set Features (09h): Supported 00:07:42.486 Get Features (0Ah): Supported 00:07:42.486 Asynchronous Event Request (0Ch): Supported 00:07:42.486 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.486 Directive Send (19h): Supported 00:07:42.486 Directive Receive (1Ah): Supported 00:07:42.486 Virtualization Management (1Ch): Supported 00:07:42.486 Doorbell Buffer Config (7Ch): Supported 00:07:42.486 Format NVM (80h): Supported LBA-Change 00:07:42.486 I/O Commands 00:07:42.486 ------------ 00:07:42.486 Flush (00h): Supported LBA-Change 00:07:42.486 Write (01h): Supported LBA-Change 00:07:42.486 Read (02h): Supported 00:07:42.486 Compare (05h): Supported 00:07:42.486 Write Zeroes (08h): Supported LBA-Change 00:07:42.486 Dataset Management (09h): Supported LBA-Change 00:07:42.486 Unknown (0Ch): Supported 00:07:42.486 Unknown (12h): Supported 00:07:42.486 Copy (19h): Supported LBA-Change 00:07:42.486 Unknown (1Dh): Supported LBA-Change 00:07:42.486 00:07:42.486 Error Log 00:07:42.486 ========= 00:07:42.486 00:07:42.486 Arbitration 00:07:42.486 =========== 00:07:42.486 Arbitration Burst: no limit 00:07:42.486 00:07:42.486 Power Management 00:07:42.486 ================ 00:07:42.486 Number of Power States: 1 00:07:42.486 Current Power State: Power State #0 00:07:42.486 Power State #0: 00:07:42.486 Max Power: 25.00 W 00:07:42.486 Non-Operational State: Operational 00:07:42.486 Entry Latency: 16 microseconds 00:07:42.486 Exit Latency: 4 microseconds 00:07:42.486 Relative Read Throughput: 0 00:07:42.486 Relative Read Latency: 0 00:07:42.486 Relative Write Throughput: 0 00:07:42.486 Relative Write Latency: 0 00:07:42.486 Idle Power: Not Reported 00:07:42.486 Active Power: Not Reported 00:07:42.486 Non-Operational Permissive Mode: Not Supported 00:07:42.486 00:07:42.486 Health Information 00:07:42.486 ================== 00:07:42.486 Critical Warnings: 00:07:42.486 Available Spare Space: OK 00:07:42.486 Temperature: OK 00:07:42.486 Device Reliability: OK 00:07:42.486 Read Only: No 00:07:42.486 Volatile Memory Backup: OK 00:07:42.486 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.486 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.486 Available Spare: 0% 00:07:42.486 Available Spare Threshold: 0% 00:07:42.486 Life Percentage Used: 0% 00:07:42.486 Data Units Read: 682 00:07:42.486 Data Units Written: 610 00:07:42.486 Host Read Commands: 37207 00:07:42.486 Host Write Commands: 36993 00:07:42.486 Controller Busy Time: 0 minutes 00:07:42.486 Power Cycles: 0 00:07:42.486 Power On Hours: 0 hours 00:07:42.486 Unsafe Shutdowns: 0 00:07:42.486 Unrecoverable Media Errors: 0 00:07:42.486 Lifetime Error Log Entries: 0 00:07:42.486 Warning Temperature Time: 0 minutes 00:07:42.486 Critical Temperature Time: 0 minutes 00:07:42.486 00:07:42.486 Number of Queues 00:07:42.486 ================ 00:07:42.486 Number of I/O Submission Queues: 64 00:07:42.486 Number of I/O Completion Queues: 64 00:07:42.486 00:07:42.486 ZNS Specific Controller Data 00:07:42.486 ============================ 00:07:42.486 Zone Append Size Limit: 0 00:07:42.486 00:07:42.486 00:07:42.486 Active Namespaces 00:07:42.486 ================= 00:07:42.486 Namespace ID:1 00:07:42.486 Error Recovery Timeout: Unlimited 00:07:42.486 Command Set Identifier: NVM (00h) 00:07:42.486 Deallocate: Supported 00:07:42.486 Deallocated/Unwritten Error: Supported 00:07:42.486 Deallocated Read Value: All 0x00 00:07:42.486 Deallocate in Write Zeroes: Not Supported 00:07:42.486 Deallocated Guard Field: 0xFFFF 00:07:42.486 Flush: 
Supported 00:07:42.486 Reservation: Not Supported 00:07:42.486 Metadata Transferred as: Separate Metadata Buffer 00:07:42.486 Namespace Sharing Capabilities: Private 00:07:42.486 Size (in LBAs): 1548666 (5GiB) 00:07:42.486 Capacity (in LBAs): 1548666 (5GiB) 00:07:42.486 Utilization (in LBAs): 1548666 (5GiB) 00:07:42.486 Thin Provisioning: Not Supported 00:07:42.486 Per-NS Atomic Units: No 00:07:42.486 Maximum Single Source Range Length: 128 00:07:42.486 Maximum Copy Length: 128 00:07:42.486 Maximum Source Range Count: 128 00:07:42.486 NGUID/EUI64 Never Reused: No 00:07:42.486 Namespace Write Protected: No 00:07:42.486 Number of LBA Formats: 8 00:07:42.486 Current LBA Format: LBA Format #07 00:07:42.486 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.486 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.486 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.486 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.486 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.486 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.486 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.486 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.486 00:07:42.486 NVM Specific Namespace Data 00:07:42.486 =========================== 00:07:42.486 Logical Block Storage Tag Mask: 0 00:07:42.486 Protection Information Capabilities: 00:07:42.486 16b Guard Protection Information Storage Tag Support: No 00:07:42.486 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.486 Storage Tag Check Read Support: No 00:07:42.486 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.486 ===================================================== 00:07:42.486 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:42.486 ===================================================== 00:07:42.486 Controller Capabilities/Features 00:07:42.486 ================================ 00:07:42.486 Vendor ID: 1b36 00:07:42.486 Subsystem Vendor ID: 1af4 00:07:42.486 Serial Number: 12341 00:07:42.486 Model Number: QEMU NVMe Ctrl 00:07:42.486 Firmware Version: 8.0.0 00:07:42.486 Recommended Arb Burst: 6 00:07:42.486 IEEE OUI Identifier: 00 54 52 00:07:42.486 Multi-path I/O 00:07:42.486 May have multiple subsystem ports: No 00:07:42.486 May have multiple controllers: No 00:07:42.487 Associated with SR-IOV VF: No 00:07:42.487 Max Data Transfer Size: 524288 00:07:42.487 Max Number of Namespaces: 256 00:07:42.487 Max Number of I/O Queues: 64 00:07:42.487 NVMe Specification Version (VS): 1.4 00:07:42.487 NVMe Specification Version (Identify): 1.4 00:07:42.487 Maximum Queue Entries: 2048 00:07:42.487 Contiguous Queues Required: Yes 00:07:42.487 Arbitration Mechanisms Supported 00:07:42.487 Weighted Round Robin: Not 
Supported 00:07:42.487 Vendor Specific: Not Supported 00:07:42.487 Reset Timeout: 7500 ms 00:07:42.487 Doorbell Stride: 4 bytes 00:07:42.487 NVM Subsystem Reset: Not Supported 00:07:42.487 Command Sets Supported 00:07:42.487 NVM Command Set: Supported 00:07:42.487 Boot Partition: Not Supported 00:07:42.487 Memory Page Size Minimum: 4096 bytes 00:07:42.487 Memory Page Size Maximum: 65536 bytes 00:07:42.487 Persistent Memory Region: Not Supported 00:07:42.487 Optional Asynchronous Events Supported 00:07:42.487 Namespace Attribute Notices: Supported 00:07:42.487 Firmware Activation Notices: Not Supported 00:07:42.487 ANA Change Notices: Not Supported 00:07:42.487 PLE Aggregate Log Change Notices: Not Supported 00:07:42.487 LBA Status Info Alert Notices: Not Supported 00:07:42.487 EGE Aggregate Log Change Notices: Not Supported 00:07:42.487 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.487 Zone Descriptor Change Notices: Not Supported 00:07:42.487 Discovery Log Change Notices: Not Supported 00:07:42.487 Controller Attributes 00:07:42.487 128-bit Host Identifier: Not Supported 00:07:42.487 Non-Operational Permissive Mode: Not Supported 00:07:42.487 NVM Sets: Not Supported 00:07:42.487 Read Recovery Levels: Not Supported 00:07:42.487 Endurance Groups: Not Supported 00:07:42.487 Predictable Latency Mode: Not Supported 00:07:42.487 Traffic Based Keep ALive: Not Supported 00:07:42.487 Namespace Granularity: Not Supported 00:07:42.487 SQ Associations: Not Supported 00:07:42.487 UUID List: Not Supported 00:07:42.487 Multi-Domain Subsystem: Not Supported 00:07:42.487 Fixed Capacity Management: Not Supported 00:07:42.487 Variable Capacity Management: Not Supported 00:07:42.487 Delete Endurance Group: Not Supported 00:07:42.487 Delete NVM Set: Not Supported 00:07:42.487 Extended LBA Formats Supported: Supported 00:07:42.487 Flexible Data Placement Supported: Not Supported 00:07:42.487 00:07:42.487 Controller Memory Buffer Support 00:07:42.487 ================================ 00:07:42.487 Supported: No 00:07:42.487 00:07:42.487 Persistent Memory Region Support 00:07:42.487 ================================ 00:07:42.487 Supported: No 00:07:42.487 00:07:42.487 Admin Command Set Attributes 00:07:42.487 ============================ 00:07:42.487 Security Send/Receive: Not Supported 00:07:42.487 Format NVM: Supported 00:07:42.487 Firmware Activate/Download: Not Supported 00:07:42.487 Namespace Management: Supported 00:07:42.487 Device Self-Test: Not Supported 00:07:42.487 Directives: Supported 00:07:42.487 NVMe-MI: Not Supported 00:07:42.487 Virtualization Management: Not Supported 00:07:42.487 Doorbell Buffer Config: Supported 00:07:42.487 Get LBA Status Capability: Not Supported 00:07:42.487 Command & Feature Lockdown Capability: Not Supported 00:07:42.487 Abort Command Limit: 4 00:07:42.487 Async Event Request Limit: 4 00:07:42.487 Number of Firmware Slots: N/A 00:07:42.487 Firmware Slot 1 Read-Only: N/A 00:07:42.487 Firmware Activation Without Reset: N/A 00:07:42.487 Multiple Update Detection Support: N/A 00:07:42.487 Firmware Update Granularity: No Information Provided 00:07:42.487 Per-Namespace SMART Log: Yes 00:07:42.487 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.487 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:42.487 Command Effects Log Page: Supported 00:07:42.487 Get Log Page Extended Data: Supported 00:07:42.487 Telemetry Log Pages: Not Supported 00:07:42.487 Persistent Event Log Pages: Not Supported 00:07:42.487 Supported Log Pages Log Page: May Support 
00:07:42.487 Commands Supported & Effects Log Page: Not Supported 00:07:42.487 Feature Identifiers & Effects Log Page:May Support 00:07:42.487 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.487 Data Area 4 for Telemetry Log: Not Supported 00:07:42.487 Error Log Page Entries Supported: 1 00:07:42.487 Keep Alive: Not Supported 00:07:42.487 00:07:42.487 NVM Command Set Attributes 00:07:42.487 ========================== 00:07:42.487 Submission Queue Entry Size 00:07:42.487 Max: 64 00:07:42.487 Min: 64 00:07:42.487 Completion Queue Entry Size 00:07:42.487 Max: 16 00:07:42.487 Min: 16 00:07:42.487 Number of Namespaces: 256 00:07:42.487 Compare Command: Supported 00:07:42.487 Write Uncorrectable Command: Not Supported 00:07:42.487 Dataset Management Command: Supported 00:07:42.487 Write Zeroes Command: Supported 00:07:42.487 Set Features Save Field: Supported 00:07:42.487 Reservations: Not Supported 00:07:42.487 Timestamp: Supported 00:07:42.487 Copy: Supported 00:07:42.487 Volatile Write Cache: Present 00:07:42.487 Atomic Write Unit (Normal): 1 00:07:42.487 Atomic Write Unit (PFail): 1 00:07:42.487 Atomic Compare & Write Unit: 1 00:07:42.487 Fused Compare & Write: Not Supported 00:07:42.487 Scatter-Gather List 00:07:42.487 SGL Command Set: Supported 00:07:42.487 SGL Keyed: Not Supported 00:07:42.487 SGL Bit Bucket Descriptor: Not Supported 00:07:42.487 SGL Metadata Pointer: Not Supported 00:07:42.487 Oversized SGL: Not Supported 00:07:42.487 SGL Metadata Address: Not Supported 00:07:42.487 SGL Offset: Not Supported 00:07:42.487 Transport SGL Data Block: Not Supported 00:07:42.487 Replay Protected Memory Block: Not Supported 00:07:42.487 00:07:42.487 Firmware Slot Information 00:07:42.487 ========================= 00:07:42.487 Active slot: 1 00:07:42.487 Slot 1 Firmware Revision: 1.0 00:07:42.487 00:07:42.487 00:07:42.487 Commands Supported and Effects 00:07:42.487 ============================== 00:07:42.487 Admin Commands 00:07:42.487 -------------- 00:07:42.487 Delete I/O Submission Queue (00h): Supported 00:07:42.487 Create I/O Submission Queue (01h): Supported 00:07:42.487 Get Log Page (02h): Supported 00:07:42.487 Delete I/O Completion Queue (04h): Supported 00:07:42.487 Create I/O Completion Queue (05h): Supported 00:07:42.487 Identify (06h): Supported 00:07:42.487 Abort (08h): Supported 00:07:42.487 Set Features (09h): Supported 00:07:42.487 Get Features (0Ah): Supported 00:07:42.487 Asynchronous Event Request (0Ch): Supported 00:07:42.487 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.487 Directive Send (19h): Supported 00:07:42.487 Directive Receive (1Ah): Supported 00:07:42.487 Virtualization Management (1Ch): Supported 00:07:42.487 Doorbell Buffer Config (7Ch): Supported 00:07:42.487 Format NVM (80h): Supported LBA-Change 00:07:42.487 I/O Commands 00:07:42.487 ------------ 00:07:42.487 Flush (00h): Supported LBA-Change 00:07:42.487 Write (01h): Supported LBA-Change 00:07:42.487 Read (02h): Supported 00:07:42.487 Compare (05h): Supported 00:07:42.487 Write Zeroes (08h): Supported LBA-Change 00:07:42.488 Dataset Management (09h): Supported LBA-Change 00:07:42.488 Unknown (0Ch): Supported 00:07:42.488 Unknown (12h): Supported 00:07:42.488 Copy (19h): Supported LBA-Change 00:07:42.488 Unknown (1Dh): Supported LBA-Change 00:07:42.488 00:07:42.488 Error Log 00:07:42.488 ========= 00:07:42.488 00:07:42.488 Arbitration 00:07:42.488 =========== 00:07:42.488 Arbitration Burst: no limit 00:07:42.488 00:07:42.488 Power Management 00:07:42.488 ================ 
00:07:42.488 Number of Power States: 1 00:07:42.488 Current Power State: Power State #0 00:07:42.488 Power State #0: 00:07:42.488 Max Power: 25.00 W 00:07:42.488 Non-Operational State: Operational 00:07:42.488 Entry Latency: 16 microseconds 00:07:42.488 Exit Latency: 4 microseconds 00:07:42.488 Relative Read Throughput: 0 00:07:42.488 Relative Read Latency: 0 00:07:42.488 Relative Write Throughput: 0 00:07:42.488 Relative Write Latency: 0 00:07:42.488 Idle Power: Not Reported 00:07:42.488 Active Power: Not Reported 00:07:42.488 Non-Operational Permissive Mode: Not Supported 00:07:42.488 00:07:42.488 Health Information 00:07:42.488 ================== 00:07:42.488 Critical Warnings: 00:07:42.488 Available Spare Space: OK 00:07:42.488 Temperature: OK 00:07:42.488 Device Reliability: OK 00:07:42.488 Read Only: No 00:07:42.488 Volatile Memory Backup: OK 00:07:42.488 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.488 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.488 Available Spare: 0% 00:07:42.488 Available Spare Threshold: 0% 00:07:42.488 Life Percentage Used: 0% 00:07:42.488 Data Units Read: 1021 00:07:42.488 Data Units Written: 886 00:07:42.488 Host Read Commands: 53940 00:07:42.488 Host Write Commands: 52711 00:07:42.488 Controller Busy Time: 0 minutes 00:07:42.488 Power Cycles: 0 00:07:42.488 Power On Hours: 0 hours 00:07:42.488 Unsafe Shutdowns: 0 00:07:42.488 Unrecoverable Media Errors: 0 00:07:42.488 Lifetime Error Log Entries: 0 00:07:42.488 Warning Temperature Time: 0 minutes 00:07:42.488 Critical Temperature Time: 0 minutes 00:07:42.488 00:07:42.488 Number of Queues 00:07:42.488 ================ 00:07:42.488 Number of I/O Submission Queues: 64 00:07:42.488 Number of I/O Completion Queues: 64 00:07:42.488 00:07:42.488 ZNS Specific Controller Data 00:07:42.488 ============================ 00:07:42.488 Zone Append Size Limit: 0 00:07:42.488 00:07:42.488 00:07:42.488 Active Namespaces 00:07:42.488 ================= 00:07:42.488 Namespace ID:1 00:07:42.488 Error Recovery Timeout: Unlimited 00:07:42.488 Command Set Identifier: NVM (00h) 00:07:42.488 Deallocate: Supported 00:07:42.488 Deallocated/Unwritten Error: Supported 00:07:42.488 Deallocated Read Value: All 0x00 00:07:42.488 Deallocate in Write Zeroes: Not Supported 00:07:42.488 Deallocated Guard Field: 0xFFFF 00:07:42.488 Flush: Supported 00:07:42.488 Reservation: Not Supported 00:07:42.488 Namespace Sharing Capabilities: Private 00:07:42.488 Size (in LBAs): 1310720 (5GiB) 00:07:42.488 Capacity (in LBAs): 1310720 (5GiB) 00:07:42.488 Utilization (in LBAs): 1310720 (5GiB) 00:07:42.488 Thin Provisioning: Not Supported 00:07:42.488 Per-NS Atomic Units: No 00:07:42.488 Maximum Single Source Range Length: 128 00:07:42.488 Maximum Copy Length: 128 00:07:42.488 Maximum Source Range Count: 128 00:07:42.488 NGUID/EUI64 Never Reused: No 00:07:42.488 Namespace Write Protected: No 00:07:42.488 Number of LBA Formats: 8 00:07:42.488 Current LBA Format: LBA Format #04 00:07:42.488 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.488 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.488 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.488 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.488 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.488 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.488 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.488 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.488 00:07:42.488 NVM Specific Namespace Data 00:07:42.488 
=========================== 00:07:42.488 Logical Block Storage Tag Mask: 0 00:07:42.488 Protection Information Capabilities: 00:07:42.488 16b Guard Protection Information Storage Tag Support: No 00:07:42.488 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.488 Storage Tag Check Read Support: No 00:07:42.488 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.488 ===================================================== 00:07:42.488 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:42.488 ===================================================== 00:07:42.488 Controller Capabilities/Features 00:07:42.488 ================================ 00:07:42.488 Vendor ID: 1b36 00:07:42.488 Subsystem Vendor ID: 1af4 00:07:42.488 Serial Number: 12343 00:07:42.488 Model Number: QEMU NVMe Ctrl 00:07:42.488 Firmware Version: 8.0.0 00:07:42.488 Recommended Arb Burst: 6 00:07:42.488 IEEE OUI Identifier: 00 54 52 00:07:42.488 Multi-path I/O 00:07:42.488 May have multiple subsystem ports: No 00:07:42.488 May have multiple controllers: Yes 00:07:42.488 Associated with SR-IOV VF: No 00:07:42.488 Max Data Transfer Size: 524288 00:07:42.488 Max Number of Namespaces: 256 00:07:42.488 Max Number of I/O Queues: 64 00:07:42.488 NVMe Specification Version (VS): 1.4 00:07:42.488 NVMe Specification Version (Identify): 1.4 00:07:42.488 Maximum Queue Entries: 2048 00:07:42.488 Contiguous Queues Required: Yes 00:07:42.488 Arbitration Mechanisms Supported 00:07:42.488 Weighted Round Robin: Not Supported 00:07:42.488 Vendor Specific: Not Supported 00:07:42.488 Reset Timeout: 7500 ms 00:07:42.488 Doorbell Stride: 4 bytes 00:07:42.488 NVM Subsystem Reset: Not Supported 00:07:42.488 Command Sets Supported 00:07:42.488 NVM Command Set: Supported 00:07:42.488 Boot Partition: Not Supported 00:07:42.488 Memory Page Size Minimum: 4096 bytes 00:07:42.488 Memory Page Size Maximum: 65536 bytes 00:07:42.488 Persistent Memory Region: Not Supported 00:07:42.488 Optional Asynchronous Events Supported 00:07:42.488 Namespace Attribute Notices: Supported 00:07:42.488 Firmware Activation Notices: Not Supported 00:07:42.488 ANA Change Notices: Not Supported 00:07:42.488 PLE Aggregate Log Change Notices: Not Supported 00:07:42.488 LBA Status Info Alert Notices: Not Supported 00:07:42.488 EGE Aggregate Log Change Notices: Not Supported 00:07:42.488 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.488 Zone Descriptor Change Notices: Not Supported 00:07:42.488 Discovery Log Change Notices: Not Supported 00:07:42.488 Controller Attributes 00:07:42.488 128-bit Host Identifier: Not Supported 00:07:42.488 Non-Operational Permissive Mode: Not Supported 00:07:42.488 NVM Sets: Not Supported 00:07:42.488 Read Recovery Levels: Not Supported 00:07:42.489 
Endurance Groups: Supported 00:07:42.489 Predictable Latency Mode: Not Supported 00:07:42.489 Traffic Based Keep ALive: Not Supported 00:07:42.489 Namespace Granularity: Not Supported 00:07:42.489 SQ Associations: Not Supported 00:07:42.489 UUID List: Not Supported 00:07:42.489 Multi-Domain Subsystem: Not Supported 00:07:42.489 Fixed Capacity Management: Not Supported 00:07:42.489 Variable Capacity Management: Not Supported 00:07:42.489 Delete Endurance Group: Not Supported 00:07:42.489 Delete NVM Set: Not Supported 00:07:42.489 Extended LBA Formats Supported: Supported 00:07:42.489 Flexible Data Placement Supported: Supported 00:07:42.489 00:07:42.489 Controller Memory Buffer Support 00:07:42.489 ================================ 00:07:42.489 Supported: No 00:07:42.489 00:07:42.489 Persistent Memory Region Support 00:07:42.489 ================================ 00:07:42.489 Supported: No 00:07:42.489 00:07:42.489 Admin Command Set Attributes 00:07:42.489 ============================ 00:07:42.489 Security Send/Receive: Not Supported 00:07:42.489 Format NVM: Supported 00:07:42.489 Firmware Activate/Download: Not Supported 00:07:42.489 Namespace Management: Supported 00:07:42.489 Device Self-Test: Not Supported 00:07:42.489 Directives: Supported 00:07:42.489 NVMe-MI: Not Supported 00:07:42.489 Virtualization Management: Not Supported 00:07:42.489 Doorbell Buffer Config: Supported 00:07:42.489 Get LBA Status Capability: Not Supported 00:07:42.489 Command & Feature Lockdown Capability: Not Supported 00:07:42.489 Abort Command Limit: 4 00:07:42.489 Async Event Request Limit: 4 00:07:42.489 Number of Firmware Slots: N/A 00:07:42.489 Firmware Slot 1 Read-Only: N/A 00:07:42.489 Firmware Activation Without Reset: N/A 00:07:42.489 Multiple Update Detection Support: N/A 00:07:42.489 Firmware Update Granularity: No Information Provided 00:07:42.489 Per-Namespace SMART Log: Yes 00:07:42.489 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.489 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:42.489 Command Effects Log Page: Supported 00:07:42.489 Get Log Page Extended Data: Supported 00:07:42.489 Telemetry Log Pages: Not Supported 00:07:42.489 Persistent Event Log Pages: Not Supported 00:07:42.489 Supported Log Pages Log Page: May Support 00:07:42.489 Commands Supported & Effects Log Page: Not Supported 00:07:42.489 Feature Identifiers & Effects Log Page:May Support 00:07:42.489 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.489 Data Area 4 for Telemetry Log: Not Supported 00:07:42.489 Error Log Page Entries Supported: 1 00:07:42.489 Keep Alive: Not Supported 00:07:42.489 00:07:42.489 NVM Command Set Attributes 00:07:42.489 ========================== 00:07:42.489 Submission Queue Entry Size 00:07:42.489 Max: 64 00:07:42.489 Min: 64 00:07:42.489 Completion Queue Entry Size 00:07:42.489 Max: 16 00:07:42.489 Min: 16 00:07:42.489 Number of Namespaces: 256 00:07:42.489 Compare Command: Supported 00:07:42.489 Write Uncorrectable Command: Not Supported 00:07:42.489 Dataset Management Command: Supported 00:07:42.489 Write Zeroes Command: Supported 00:07:42.489 Set Features Save Field: Supported 00:07:42.489 Reservations: Not Supported 00:07:42.489 Timestamp: Supported 00:07:42.489 Copy: Supported 00:07:42.489 Volatile Write Cache: Present 00:07:42.489 Atomic Write Unit (Normal): 1 00:07:42.489 Atomic Write Unit (PFail): 1 00:07:42.489 Atomic Compare & Write Unit: 1 00:07:42.489 Fused Compare & Write: Not Supported 00:07:42.489 Scatter-Gather List 00:07:42.489 SGL Command Set: Supported 
00:07:42.489 SGL Keyed: Not Supported 00:07:42.489 SGL Bit Bucket Descriptor: Not Supported 00:07:42.489 SGL Metadata Pointer: Not Supported 00:07:42.489 Oversized SGL: Not Supported 00:07:42.489 SGL Metadata Address: Not Supported 00:07:42.489 SGL Offset: Not Supported 00:07:42.489 Transport SGL Data Block: Not Supported 00:07:42.489 Replay Protected Memory Block: Not Supported 00:07:42.489 00:07:42.489 Firmware Slot Information 00:07:42.489 ========================= 00:07:42.489 Active slot: 1 00:07:42.489 Slot 1 Firmware Revision: 1.0 00:07:42.489 00:07:42.489 00:07:42.489 Commands Supported and Effects 00:07:42.489 ============================== 00:07:42.489 Admin Commands 00:07:42.489 -------------- 00:07:42.489 Delete I/O Submission Queue (00h): Supported 00:07:42.489 Create I/O Submission Queue (01h): Supported 00:07:42.489 Get Log Page (02h): Supported 00:07:42.489 Delete I/O Completion Queue (04h): Supported 00:07:42.489 Create I/O Completion Queue (05h): Supported 00:07:42.489 Identify (06h): Supported 00:07:42.489 Abort (08h): Supported 00:07:42.489 Set Features (09h): Supported 00:07:42.489 Get Features (0Ah): Supported 00:07:42.489 Asynchronous Event Request (0Ch): Supported 00:07:42.489 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.489 Directive Send (19h): Supported 00:07:42.489 Directive Receive (1Ah): Supported 00:07:42.489 Virtualization Management (1Ch): Supported 00:07:42.489 Doorbell Buffer Config (7Ch): Supported 00:07:42.489 Format NVM (80h): Supported LBA-Change 00:07:42.489 I/O Commands 00:07:42.489 ------------ 00:07:42.489 Flush (00h): Supported LBA-Change 00:07:42.489 Write (01h): Supported LBA-Change 00:07:42.489 Read (02h): Supported 00:07:42.489 Compare (05h): Supported 00:07:42.489 Write Zeroes (08h): Supported LBA-Change 00:07:42.489 Dataset Management (09h): Supported LBA-Change 00:07:42.489 Unknown (0Ch): Supported 00:07:42.489 Unknown (12h): Supported 00:07:42.489 Copy (19h): Supported LBA-Change 00:07:42.489 Unknown (1Dh): Supported LBA-Change 00:07:42.489 00:07:42.489 Error Log 00:07:42.489 ========= 00:07:42.489 00:07:42.489 Arbitration 00:07:42.489 =========== 00:07:42.489 Arbitration Burst: no limit 00:07:42.489 00:07:42.489 Power Management 00:07:42.489 ================ 00:07:42.489 Number of Power States: 1 00:07:42.489 Current Power State: Power State #0 00:07:42.489 Power State #0: 00:07:42.489 Max Power: 25.00 W 00:07:42.489 Non-Operational State: Operational 00:07:42.489 Entry Latency: 16 microseconds 00:07:42.489 Exit Latency: 4 microseconds 00:07:42.489 Relative Read Throughput: 0 00:07:42.489 Relative Read Latency: 0 00:07:42.489 Relative Write Throughput: 0 00:07:42.489 Relative Write Latency: 0 00:07:42.489 Idle Power: Not Reported 00:07:42.489 Active Power: Not Reported 00:07:42.489 Non-Operational Permissive Mode: Not Supported 00:07:42.489 00:07:42.489 Health Information 00:07:42.489 ================== 00:07:42.489 Critical Warnings: 00:07:42.489 Available Spare Space: OK 00:07:42.489 Temperature: OK 00:07:42.489 Device Reliability: OK 00:07:42.489 Read Only: No 00:07:42.489 Volatile Memory Backup: OK 00:07:42.489 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.489 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.489 Available Spare: 0% 00:07:42.489 Available Spare Threshold: 0% 00:07:42.489 Life Percentage Used: ted unexpected 00:07:42.490 [2024-10-27 11:22:27.713539] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62699 terminated unexpected 00:07:42.490 
[2024-10-27 11:22:27.714411] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62699 terminated unexpected 00:07:42.490 0% 00:07:42.490 Data Units Read: 906 00:07:42.490 Data Units Written: 835 00:07:42.490 Host Read Commands: 39494 00:07:42.490 Host Write Commands: 38918 00:07:42.490 Controller Busy Time: 0 minutes 00:07:42.490 Power Cycles: 0 00:07:42.490 Power On Hours: 0 hours 00:07:42.490 Unsafe Shutdowns: 0 00:07:42.490 Unrecoverable Media Errors: 0 00:07:42.490 Lifetime Error Log Entries: 0 00:07:42.490 Warning Temperature Time: 0 minutes 00:07:42.490 Critical Temperature Time: 0 minutes 00:07:42.490 00:07:42.490 Number of Queues 00:07:42.490 ================ 00:07:42.490 Number of I/O Submission Queues: 64 00:07:42.490 Number of I/O Completion Queues: 64 00:07:42.490 00:07:42.490 ZNS Specific Controller Data 00:07:42.490 ============================ 00:07:42.490 Zone Append Size Limit: 0 00:07:42.490 00:07:42.490 00:07:42.490 Active Namespaces 00:07:42.490 ================= 00:07:42.490 Namespace ID:1 00:07:42.490 Error Recovery Timeout: Unlimited 00:07:42.490 Command Set Identifier: NVM (00h) 00:07:42.490 Deallocate: Supported 00:07:42.490 Deallocated/Unwritten Error: Supported 00:07:42.490 Deallocated Read Value: All 0x00 00:07:42.490 Deallocate in Write Zeroes: Not Supported 00:07:42.490 Deallocated Guard Field: 0xFFFF 00:07:42.490 Flush: Supported 00:07:42.490 Reservation: Not Supported 00:07:42.490 Namespace Sharing Capabilities: Multiple Controllers 00:07:42.490 Size (in LBAs): 262144 (1GiB) 00:07:42.490 Capacity (in LBAs): 262144 (1GiB) 00:07:42.490 Utilization (in LBAs): 262144 (1GiB) 00:07:42.490 Thin Provisioning: Not Supported 00:07:42.490 Per-NS Atomic Units: No 00:07:42.490 Maximum Single Source Range Length: 128 00:07:42.490 Maximum Copy Length: 128 00:07:42.490 Maximum Source Range Count: 128 00:07:42.490 NGUID/EUI64 Never Reused: No 00:07:42.490 Namespace Write Protected: No 00:07:42.490 Endurance group ID: 1 00:07:42.490 Number of LBA Formats: 8 00:07:42.490 Current LBA Format: LBA Format #04 00:07:42.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.490 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.490 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.490 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.490 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.490 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.490 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.490 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.490 00:07:42.490 Get Feature FDP: 00:07:42.490 ================ 00:07:42.490 Enabled: Yes 00:07:42.490 FDP configuration index: 0 00:07:42.490 00:07:42.490 FDP configurations log page 00:07:42.490 =========================== 00:07:42.490 Number of FDP configurations: 1 00:07:42.490 Version: 0 00:07:42.490 Size: 112 00:07:42.490 FDP Configuration Descriptor: 0 00:07:42.490 Descriptor Size: 96 00:07:42.490 Reclaim Group Identifier format: 2 00:07:42.490 FDP Volatile Write Cache: Not Present 00:07:42.490 FDP Configuration: Valid 00:07:42.490 Vendor Specific Size: 0 00:07:42.490 Number of Reclaim Groups: 2 00:07:42.490 Number of Recalim Unit Handles: 8 00:07:42.490 Max Placement Identifiers: 128 00:07:42.490 Number of Namespaces Suppprted: 256 00:07:42.490 Reclaim unit Nominal Size: 6000000 bytes 00:07:42.490 Estimated Reclaim Unit Time Limit: Not Reported 00:07:42.490 RUH Desc #000: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #001: RUH Type: 
Initially Isolated 00:07:42.490 RUH Desc #002: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #003: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #004: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #005: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #006: RUH Type: Initially Isolated 00:07:42.490 RUH Desc #007: RUH Type: Initially Isolated 00:07:42.490 00:07:42.490 FDP reclaim unit handle usage log page 00:07:42.490 ====================================== 00:07:42.490 Number of Reclaim Unit Handles: 8 00:07:42.490 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:42.490 RUH Usage Desc #001: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #002: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #003: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #004: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #005: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #006: RUH Attributes: Unused 00:07:42.490 RUH Usage Desc #007: RUH Attributes: Unused 00:07:42.490 00:07:42.490 FDP statistics log page 00:07:42.490 ======================= 00:07:42.490 Host bytes with metadata written: 519217152 00:07:42.490 Medi[2024-10-27 11:22:27.716384] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62699 terminated unexpected 00:07:42.490 a bytes with metadata written: 519274496 00:07:42.490 Media bytes erased: 0 00:07:42.490 00:07:42.490 FDP events log page 00:07:42.490 =================== 00:07:42.490 Number of FDP events: 0 00:07:42.490 00:07:42.490 NVM Specific Namespace Data 00:07:42.490 =========================== 00:07:42.490 Logical Block Storage Tag Mask: 0 00:07:42.490 Protection Information Capabilities: 00:07:42.490 16b Guard Protection Information Storage Tag Support: No 00:07:42.490 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.490 Storage Tag Check Read Support: No 00:07:42.490 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.490 ===================================================== 00:07:42.490 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:42.490 ===================================================== 00:07:42.490 Controller Capabilities/Features 00:07:42.490 ================================ 00:07:42.490 Vendor ID: 1b36 00:07:42.490 Subsystem Vendor ID: 1af4 00:07:42.490 Serial Number: 12342 00:07:42.490 Model Number: QEMU NVMe Ctrl 00:07:42.490 Firmware Version: 8.0.0 00:07:42.490 Recommended Arb Burst: 6 00:07:42.490 IEEE OUI Identifier: 00 54 52 00:07:42.490 Multi-path I/O 00:07:42.490 May have multiple subsystem ports: No 00:07:42.490 May have multiple controllers: No 00:07:42.490 Associated with SR-IOV VF: No 00:07:42.490 Max Data Transfer Size: 524288 00:07:42.490 Max Number of Namespaces: 256 00:07:42.490 Max 
Number of I/O Queues: 64 00:07:42.491 NVMe Specification Version (VS): 1.4 00:07:42.491 NVMe Specification Version (Identify): 1.4 00:07:42.491 Maximum Queue Entries: 2048 00:07:42.491 Contiguous Queues Required: Yes 00:07:42.491 Arbitration Mechanisms Supported 00:07:42.491 Weighted Round Robin: Not Supported 00:07:42.491 Vendor Specific: Not Supported 00:07:42.491 Reset Timeout: 7500 ms 00:07:42.491 Doorbell Stride: 4 bytes 00:07:42.491 NVM Subsystem Reset: Not Supported 00:07:42.491 Command Sets Supported 00:07:42.491 NVM Command Set: Supported 00:07:42.491 Boot Partition: Not Supported 00:07:42.491 Memory Page Size Minimum: 4096 bytes 00:07:42.491 Memory Page Size Maximum: 65536 bytes 00:07:42.491 Persistent Memory Region: Not Supported 00:07:42.491 Optional Asynchronous Events Supported 00:07:42.491 Namespace Attribute Notices: Supported 00:07:42.491 Firmware Activation Notices: Not Supported 00:07:42.491 ANA Change Notices: Not Supported 00:07:42.491 PLE Aggregate Log Change Notices: Not Supported 00:07:42.491 LBA Status Info Alert Notices: Not Supported 00:07:42.491 EGE Aggregate Log Change Notices: Not Supported 00:07:42.491 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.491 Zone Descriptor Change Notices: Not Supported 00:07:42.491 Discovery Log Change Notices: Not Supported 00:07:42.491 Controller Attributes 00:07:42.491 128-bit Host Identifier: Not Supported 00:07:42.491 Non-Operational Permissive Mode: Not Supported 00:07:42.491 NVM Sets: Not Supported 00:07:42.491 Read Recovery Levels: Not Supported 00:07:42.491 Endurance Groups: Not Supported 00:07:42.491 Predictable Latency Mode: Not Supported 00:07:42.491 Traffic Based Keep ALive: Not Supported 00:07:42.491 Namespace Granularity: Not Supported 00:07:42.491 SQ Associations: Not Supported 00:07:42.491 UUID List: Not Supported 00:07:42.491 Multi-Domain Subsystem: Not Supported 00:07:42.491 Fixed Capacity Management: Not Supported 00:07:42.491 Variable Capacity Management: Not Supported 00:07:42.491 Delete Endurance Group: Not Supported 00:07:42.491 Delete NVM Set: Not Supported 00:07:42.491 Extended LBA Formats Supported: Supported 00:07:42.491 Flexible Data Placement Supported: Not Supported 00:07:42.491 00:07:42.491 Controller Memory Buffer Support 00:07:42.491 ================================ 00:07:42.491 Supported: No 00:07:42.491 00:07:42.491 Persistent Memory Region Support 00:07:42.491 ================================ 00:07:42.491 Supported: No 00:07:42.491 00:07:42.491 Admin Command Set Attributes 00:07:42.491 ============================ 00:07:42.491 Security Send/Receive: Not Supported 00:07:42.491 Format NVM: Supported 00:07:42.491 Firmware Activate/Download: Not Supported 00:07:42.491 Namespace Management: Supported 00:07:42.491 Device Self-Test: Not Supported 00:07:42.491 Directives: Supported 00:07:42.491 NVMe-MI: Not Supported 00:07:42.491 Virtualization Management: Not Supported 00:07:42.491 Doorbell Buffer Config: Supported 00:07:42.491 Get LBA Status Capability: Not Supported 00:07:42.491 Command & Feature Lockdown Capability: Not Supported 00:07:42.491 Abort Command Limit: 4 00:07:42.491 Async Event Request Limit: 4 00:07:42.491 Number of Firmware Slots: N/A 00:07:42.491 Firmware Slot 1 Read-Only: N/A 00:07:42.491 Firmware Activation Without Reset: N/A 00:07:42.491 Multiple Update Detection Support: N/A 00:07:42.491 Firmware Update Granularity: No Information Provided 00:07:42.491 Per-Namespace SMART Log: Yes 00:07:42.491 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.491 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:07:42.491 Command Effects Log Page: Supported 00:07:42.491 Get Log Page Extended Data: Supported 00:07:42.491 Telemetry Log Pages: Not Supported 00:07:42.491 Persistent Event Log Pages: Not Supported 00:07:42.491 Supported Log Pages Log Page: May Support 00:07:42.491 Commands Supported & Effects Log Page: Not Supported 00:07:42.491 Feature Identifiers & Effects Log Page:May Support 00:07:42.491 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.491 Data Area 4 for Telemetry Log: Not Supported 00:07:42.491 Error Log Page Entries Supported: 1 00:07:42.491 Keep Alive: Not Supported 00:07:42.491 00:07:42.491 NVM Command Set Attributes 00:07:42.491 ========================== 00:07:42.491 Submission Queue Entry Size 00:07:42.491 Max: 64 00:07:42.491 Min: 64 00:07:42.491 Completion Queue Entry Size 00:07:42.491 Max: 16 00:07:42.491 Min: 16 00:07:42.491 Number of Namespaces: 256 00:07:42.491 Compare Command: Supported 00:07:42.491 Write Uncorrectable Command: Not Supported 00:07:42.491 Dataset Management Command: Supported 00:07:42.491 Write Zeroes Command: Supported 00:07:42.491 Set Features Save Field: Supported 00:07:42.491 Reservations: Not Supported 00:07:42.491 Timestamp: Supported 00:07:42.491 Copy: Supported 00:07:42.491 Volatile Write Cache: Present 00:07:42.491 Atomic Write Unit (Normal): 1 00:07:42.491 Atomic Write Unit (PFail): 1 00:07:42.491 Atomic Compare & Write Unit: 1 00:07:42.491 Fused Compare & Write: Not Supported 00:07:42.491 Scatter-Gather List 00:07:42.491 SGL Command Set: Supported 00:07:42.491 SGL Keyed: Not Supported 00:07:42.491 SGL Bit Bucket Descriptor: Not Supported 00:07:42.491 SGL Metadata Pointer: Not Supported 00:07:42.491 Oversized SGL: Not Supported 00:07:42.491 SGL Metadata Address: Not Supported 00:07:42.491 SGL Offset: Not Supported 00:07:42.491 Transport SGL Data Block: Not Supported 00:07:42.491 Replay Protected Memory Block: Not Supported 00:07:42.491 00:07:42.491 Firmware Slot Information 00:07:42.491 ========================= 00:07:42.491 Active slot: 1 00:07:42.491 Slot 1 Firmware Revision: 1.0 00:07:42.491 00:07:42.492 00:07:42.492 Commands Supported and Effects 00:07:42.492 ============================== 00:07:42.492 Admin Commands 00:07:42.492 -------------- 00:07:42.492 Delete I/O Submission Queue (00h): Supported 00:07:42.492 Create I/O Submission Queue (01h): Supported 00:07:42.492 Get Log Page (02h): Supported 00:07:42.492 Delete I/O Completion Queue (04h): Supported 00:07:42.492 Create I/O Completion Queue (05h): Supported 00:07:42.492 Identify (06h): Supported 00:07:42.492 Abort (08h): Supported 00:07:42.492 Set Features (09h): Supported 00:07:42.492 Get Features (0Ah): Supported 00:07:42.492 Asynchronous Event Request (0Ch): Supported 00:07:42.492 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.492 Directive Send (19h): Supported 00:07:42.492 Directive Receive (1Ah): Supported 00:07:42.492 Virtualization Management (1Ch): Supported 00:07:42.492 Doorbell Buffer Config (7Ch): Supported 00:07:42.492 Format NVM (80h): Supported LBA-Change 00:07:42.492 I/O Commands 00:07:42.492 ------------ 00:07:42.492 Flush (00h): Supported LBA-Change 00:07:42.492 Write (01h): Supported LBA-Change 00:07:42.492 Read (02h): Supported 00:07:42.492 Compare (05h): Supported 00:07:42.492 Write Zeroes (08h): Supported LBA-Change 00:07:42.492 Dataset Management (09h): Supported LBA-Change 00:07:42.492 Unknown (0Ch): Supported 00:07:42.492 Unknown (12h): Supported 00:07:42.492 Copy (19h): Supported LBA-Change 
00:07:42.492 Unknown (1Dh): Supported LBA-Change 00:07:42.492 00:07:42.492 Error Log 00:07:42.492 ========= 00:07:42.492 00:07:42.492 Arbitration 00:07:42.492 =========== 00:07:42.492 Arbitration Burst: no limit 00:07:42.492 00:07:42.492 Power Management 00:07:42.492 ================ 00:07:42.492 Number of Power States: 1 00:07:42.492 Current Power State: Power State #0 00:07:42.492 Power State #0: 00:07:42.492 Max Power: 25.00 W 00:07:42.492 Non-Operational State: Operational 00:07:42.492 Entry Latency: 16 microseconds 00:07:42.492 Exit Latency: 4 microseconds 00:07:42.492 Relative Read Throughput: 0 00:07:42.492 Relative Read Latency: 0 00:07:42.492 Relative Write Throughput: 0 00:07:42.492 Relative Write Latency: 0 00:07:42.492 Idle Power: Not Reported 00:07:42.492 Active Power: Not Reported 00:07:42.492 Non-Operational Permissive Mode: Not Supported 00:07:42.492 00:07:42.492 Health Information 00:07:42.492 ================== 00:07:42.492 Critical Warnings: 00:07:42.492 Available Spare Space: OK 00:07:42.492 Temperature: OK 00:07:42.492 Device Reliability: OK 00:07:42.492 Read Only: No 00:07:42.492 Volatile Memory Backup: OK 00:07:42.492 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.492 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.492 Available Spare: 0% 00:07:42.492 Available Spare Threshold: 0% 00:07:42.492 Life Percentage Used: 0% 00:07:42.492 Data Units Read: 2206 00:07:42.492 Data Units Written: 1993 00:07:42.492 Host Read Commands: 114012 00:07:42.492 Host Write Commands: 112281 00:07:42.492 Controller Busy Time: 0 minutes 00:07:42.492 Power Cycles: 0 00:07:42.492 Power On Hours: 0 hours 00:07:42.492 Unsafe Shutdowns: 0 00:07:42.492 Unrecoverable Media Errors: 0 00:07:42.492 Lifetime Error Log Entries: 0 00:07:42.492 Warning Temperature Time: 0 minutes 00:07:42.492 Critical Temperature Time: 0 minutes 00:07:42.492 00:07:42.492 Number of Queues 00:07:42.492 ================ 00:07:42.492 Number of I/O Submission Queues: 64 00:07:42.492 Number of I/O Completion Queues: 64 00:07:42.492 00:07:42.492 ZNS Specific Controller Data 00:07:42.492 ============================ 00:07:42.492 Zone Append Size Limit: 0 00:07:42.492 00:07:42.492 00:07:42.492 Active Namespaces 00:07:42.492 ================= 00:07:42.492 Namespace ID:1 00:07:42.492 Error Recovery Timeout: Unlimited 00:07:42.492 Command Set Identifier: NVM (00h) 00:07:42.492 Deallocate: Supported 00:07:42.492 Deallocated/Unwritten Error: Supported 00:07:42.492 Deallocated Read Value: All 0x00 00:07:42.492 Deallocate in Write Zeroes: Not Supported 00:07:42.492 Deallocated Guard Field: 0xFFFF 00:07:42.492 Flush: Supported 00:07:42.492 Reservation: Not Supported 00:07:42.492 Namespace Sharing Capabilities: Private 00:07:42.492 Size (in LBAs): 1048576 (4GiB) 00:07:42.492 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.492 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.492 Thin Provisioning: Not Supported 00:07:42.492 Per-NS Atomic Units: No 00:07:42.492 Maximum Single Source Range Length: 128 00:07:42.492 Maximum Copy Length: 128 00:07:42.492 Maximum Source Range Count: 128 00:07:42.492 NGUID/EUI64 Never Reused: No 00:07:42.492 Namespace Write Protected: No 00:07:42.492 Number of LBA Formats: 8 00:07:42.492 Current LBA Format: LBA Format #04 00:07:42.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.492 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.492 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.492 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.492 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:42.492 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.492 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.492 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.492 00:07:42.492 NVM Specific Namespace Data 00:07:42.492 =========================== 00:07:42.492 Logical Block Storage Tag Mask: 0 00:07:42.492 Protection Information Capabilities: 00:07:42.492 16b Guard Protection Information Storage Tag Support: No 00:07:42.492 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.492 Storage Tag Check Read Support: No 00:07:42.492 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.492 Namespace ID:2 00:07:42.492 Error Recovery Timeout: Unlimited 00:07:42.492 Command Set Identifier: NVM (00h) 00:07:42.492 Deallocate: Supported 00:07:42.492 Deallocated/Unwritten Error: Supported 00:07:42.492 Deallocated Read Value: All 0x00 00:07:42.492 Deallocate in Write Zeroes: Not Supported 00:07:42.492 Deallocated Guard Field: 0xFFFF 00:07:42.492 Flush: Supported 00:07:42.492 Reservation: Not Supported 00:07:42.492 Namespace Sharing Capabilities: Private 00:07:42.492 Size (in LBAs): 1048576 (4GiB) 00:07:42.492 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.492 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.492 Thin Provisioning: Not Supported 00:07:42.493 Per-NS Atomic Units: No 00:07:42.493 Maximum Single Source Range Length: 128 00:07:42.493 Maximum Copy Length: 128 00:07:42.493 Maximum Source Range Count: 128 00:07:42.493 NGUID/EUI64 Never Reused: No 00:07:42.493 Namespace Write Protected: No 00:07:42.493 Number of LBA Formats: 8 00:07:42.493 Current LBA Format: LBA Format #04 00:07:42.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.493 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.493 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.493 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.493 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.493 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.493 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.493 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.493 00:07:42.493 NVM Specific Namespace Data 00:07:42.493 =========================== 00:07:42.493 Logical Block Storage Tag Mask: 0 00:07:42.493 Protection Information Capabilities: 00:07:42.493 16b Guard Protection Information Storage Tag Support: No 00:07:42.493 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.493 Storage Tag Check Read Support: No 00:07:42.493 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:42.493 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Namespace ID:3 00:07:42.493 Error Recovery Timeout: Unlimited 00:07:42.493 Command Set Identifier: NVM (00h) 00:07:42.493 Deallocate: Supported 00:07:42.493 Deallocated/Unwritten Error: Supported 00:07:42.493 Deallocated Read Value: All 0x00 00:07:42.493 Deallocate in Write Zeroes: Not Supported 00:07:42.493 Deallocated Guard Field: 0xFFFF 00:07:42.493 Flush: Supported 00:07:42.493 Reservation: Not Supported 00:07:42.493 Namespace Sharing Capabilities: Private 00:07:42.493 Size (in LBAs): 1048576 (4GiB) 00:07:42.493 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.493 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.493 Thin Provisioning: Not Supported 00:07:42.493 Per-NS Atomic Units: No 00:07:42.493 Maximum Single Source Range Length: 128 00:07:42.493 Maximum Copy Length: 128 00:07:42.493 Maximum Source Range Count: 128 00:07:42.493 NGUID/EUI64 Never Reused: No 00:07:42.493 Namespace Write Protected: No 00:07:42.493 Number of LBA Formats: 8 00:07:42.493 Current LBA Format: LBA Format #04 00:07:42.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.493 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.493 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.493 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.493 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.493 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.493 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.493 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.493 00:07:42.493 NVM Specific Namespace Data 00:07:42.493 =========================== 00:07:42.493 Logical Block Storage Tag Mask: 0 00:07:42.493 Protection Information Capabilities: 00:07:42.493 16b Guard Protection Information Storage Tag Support: No 00:07:42.493 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.493 Storage Tag Check Read Support: No 00:07:42.493 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.493 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:42.493 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:42.758 ===================================================== 00:07:42.758 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:42.758 ===================================================== 00:07:42.758 Controller Capabilities/Features 00:07:42.758 ================================ 00:07:42.758 Vendor ID: 1b36 00:07:42.758 Subsystem Vendor ID: 1af4 00:07:42.758 Serial Number: 12340 00:07:42.758 Model Number: QEMU NVMe Ctrl 00:07:42.758 Firmware Version: 8.0.0 00:07:42.758 Recommended Arb Burst: 6 00:07:42.758 IEEE OUI Identifier: 00 54 52 00:07:42.758 Multi-path I/O 00:07:42.758 May have multiple subsystem ports: No 00:07:42.758 May have multiple controllers: No 00:07:42.758 Associated with SR-IOV VF: No 00:07:42.758 Max Data Transfer Size: 524288 00:07:42.758 Max Number of Namespaces: 256 00:07:42.758 Max Number of I/O Queues: 64 00:07:42.758 NVMe Specification Version (VS): 1.4 00:07:42.758 NVMe Specification Version (Identify): 1.4 00:07:42.758 Maximum Queue Entries: 2048 00:07:42.758 Contiguous Queues Required: Yes 00:07:42.758 Arbitration Mechanisms Supported 00:07:42.758 Weighted Round Robin: Not Supported 00:07:42.758 Vendor Specific: Not Supported 00:07:42.758 Reset Timeout: 7500 ms 00:07:42.758 Doorbell Stride: 4 bytes 00:07:42.759 NVM Subsystem Reset: Not Supported 00:07:42.759 Command Sets Supported 00:07:42.759 NVM Command Set: Supported 00:07:42.759 Boot Partition: Not Supported 00:07:42.759 Memory Page Size Minimum: 4096 bytes 00:07:42.759 Memory Page Size Maximum: 65536 bytes 00:07:42.759 Persistent Memory Region: Not Supported 00:07:42.759 Optional Asynchronous Events Supported 00:07:42.759 Namespace Attribute Notices: Supported 00:07:42.759 Firmware Activation Notices: Not Supported 00:07:42.759 ANA Change Notices: Not Supported 00:07:42.759 PLE Aggregate Log Change Notices: Not Supported 00:07:42.759 LBA Status Info Alert Notices: Not Supported 00:07:42.759 EGE Aggregate Log Change Notices: Not Supported 00:07:42.759 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.759 Zone Descriptor Change Notices: Not Supported 00:07:42.759 Discovery Log Change Notices: Not Supported 00:07:42.759 Controller Attributes 00:07:42.759 128-bit Host Identifier: Not Supported 00:07:42.759 Non-Operational Permissive Mode: Not Supported 00:07:42.759 NVM Sets: Not Supported 00:07:42.759 Read Recovery Levels: Not Supported 00:07:42.759 Endurance Groups: Not Supported 00:07:42.759 Predictable Latency Mode: Not Supported 00:07:42.759 Traffic Based Keep ALive: Not Supported 00:07:42.759 Namespace Granularity: Not Supported 00:07:42.759 SQ Associations: Not Supported 00:07:42.759 UUID List: Not Supported 00:07:42.759 Multi-Domain Subsystem: Not Supported 00:07:42.759 Fixed Capacity Management: Not Supported 00:07:42.759 Variable Capacity Management: Not Supported 00:07:42.759 Delete Endurance Group: Not Supported 00:07:42.759 Delete NVM Set: Not Supported 00:07:42.759 Extended LBA Formats Supported: Supported 00:07:42.759 Flexible Data Placement Supported: Not Supported 00:07:42.759 00:07:42.759 Controller Memory Buffer Support 00:07:42.759 ================================ 00:07:42.759 Supported: No 00:07:42.759 00:07:42.759 Persistent Memory Region Support 00:07:42.759 ================================ 00:07:42.759 Supported: No 00:07:42.759 00:07:42.759 Admin Command Set Attributes 00:07:42.759 ============================ 00:07:42.759 Security Send/Receive: Not Supported 00:07:42.759 
Format NVM: Supported 00:07:42.759 Firmware Activate/Download: Not Supported 00:07:42.759 Namespace Management: Supported 00:07:42.759 Device Self-Test: Not Supported 00:07:42.759 Directives: Supported 00:07:42.759 NVMe-MI: Not Supported 00:07:42.759 Virtualization Management: Not Supported 00:07:42.759 Doorbell Buffer Config: Supported 00:07:42.759 Get LBA Status Capability: Not Supported 00:07:42.759 Command & Feature Lockdown Capability: Not Supported 00:07:42.759 Abort Command Limit: 4 00:07:42.759 Async Event Request Limit: 4 00:07:42.759 Number of Firmware Slots: N/A 00:07:42.759 Firmware Slot 1 Read-Only: N/A 00:07:42.759 Firmware Activation Without Reset: N/A 00:07:42.759 Multiple Update Detection Support: N/A 00:07:42.759 Firmware Update Granularity: No Information Provided 00:07:42.759 Per-Namespace SMART Log: Yes 00:07:42.759 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.759 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:42.759 Command Effects Log Page: Supported 00:07:42.759 Get Log Page Extended Data: Supported 00:07:42.759 Telemetry Log Pages: Not Supported 00:07:42.759 Persistent Event Log Pages: Not Supported 00:07:42.759 Supported Log Pages Log Page: May Support 00:07:42.759 Commands Supported & Effects Log Page: Not Supported 00:07:42.759 Feature Identifiers & Effects Log Page:May Support 00:07:42.759 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.759 Data Area 4 for Telemetry Log: Not Supported 00:07:42.759 Error Log Page Entries Supported: 1 00:07:42.759 Keep Alive: Not Supported 00:07:42.759 00:07:42.759 NVM Command Set Attributes 00:07:42.759 ========================== 00:07:42.759 Submission Queue Entry Size 00:07:42.759 Max: 64 00:07:42.759 Min: 64 00:07:42.759 Completion Queue Entry Size 00:07:42.759 Max: 16 00:07:42.759 Min: 16 00:07:42.759 Number of Namespaces: 256 00:07:42.759 Compare Command: Supported 00:07:42.759 Write Uncorrectable Command: Not Supported 00:07:42.759 Dataset Management Command: Supported 00:07:42.759 Write Zeroes Command: Supported 00:07:42.759 Set Features Save Field: Supported 00:07:42.759 Reservations: Not Supported 00:07:42.759 Timestamp: Supported 00:07:42.759 Copy: Supported 00:07:42.759 Volatile Write Cache: Present 00:07:42.759 Atomic Write Unit (Normal): 1 00:07:42.759 Atomic Write Unit (PFail): 1 00:07:42.759 Atomic Compare & Write Unit: 1 00:07:42.759 Fused Compare & Write: Not Supported 00:07:42.759 Scatter-Gather List 00:07:42.759 SGL Command Set: Supported 00:07:42.759 SGL Keyed: Not Supported 00:07:42.759 SGL Bit Bucket Descriptor: Not Supported 00:07:42.759 SGL Metadata Pointer: Not Supported 00:07:42.759 Oversized SGL: Not Supported 00:07:42.759 SGL Metadata Address: Not Supported 00:07:42.759 SGL Offset: Not Supported 00:07:42.759 Transport SGL Data Block: Not Supported 00:07:42.759 Replay Protected Memory Block: Not Supported 00:07:42.759 00:07:42.759 Firmware Slot Information 00:07:42.759 ========================= 00:07:42.759 Active slot: 1 00:07:42.759 Slot 1 Firmware Revision: 1.0 00:07:42.759 00:07:42.759 00:07:42.759 Commands Supported and Effects 00:07:42.759 ============================== 00:07:42.759 Admin Commands 00:07:42.759 -------------- 00:07:42.759 Delete I/O Submission Queue (00h): Supported 00:07:42.759 Create I/O Submission Queue (01h): Supported 00:07:42.759 Get Log Page (02h): Supported 00:07:42.759 Delete I/O Completion Queue (04h): Supported 00:07:42.759 Create I/O Completion Queue (05h): Supported 00:07:42.759 Identify (06h): Supported 00:07:42.759 Abort (08h): Supported 
00:07:42.759 Set Features (09h): Supported 00:07:42.759 Get Features (0Ah): Supported 00:07:42.759 Asynchronous Event Request (0Ch): Supported 00:07:42.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.759 Directive Send (19h): Supported 00:07:42.759 Directive Receive (1Ah): Supported 00:07:42.759 Virtualization Management (1Ch): Supported 00:07:42.759 Doorbell Buffer Config (7Ch): Supported 00:07:42.759 Format NVM (80h): Supported LBA-Change 00:07:42.759 I/O Commands 00:07:42.759 ------------ 00:07:42.759 Flush (00h): Supported LBA-Change 00:07:42.759 Write (01h): Supported LBA-Change 00:07:42.759 Read (02h): Supported 00:07:42.759 Compare (05h): Supported 00:07:42.759 Write Zeroes (08h): Supported LBA-Change 00:07:42.759 Dataset Management (09h): Supported LBA-Change 00:07:42.759 Unknown (0Ch): Supported 00:07:42.759 Unknown (12h): Supported 00:07:42.759 Copy (19h): Supported LBA-Change 00:07:42.759 Unknown (1Dh): Supported LBA-Change 00:07:42.759 00:07:42.759 Error Log 00:07:42.759 ========= 00:07:42.759 00:07:42.759 Arbitration 00:07:42.759 =========== 00:07:42.759 Arbitration Burst: no limit 00:07:42.759 00:07:42.759 Power Management 00:07:42.760 ================ 00:07:42.760 Number of Power States: 1 00:07:42.760 Current Power State: Power State #0 00:07:42.760 Power State #0: 00:07:42.760 Max Power: 25.00 W 00:07:42.760 Non-Operational State: Operational 00:07:42.760 Entry Latency: 16 microseconds 00:07:42.760 Exit Latency: 4 microseconds 00:07:42.760 Relative Read Throughput: 0 00:07:42.760 Relative Read Latency: 0 00:07:42.760 Relative Write Throughput: 0 00:07:42.760 Relative Write Latency: 0 00:07:42.760 Idle Power: Not Reported 00:07:42.760 Active Power: Not Reported 00:07:42.760 Non-Operational Permissive Mode: Not Supported 00:07:42.760 00:07:42.760 Health Information 00:07:42.760 ================== 00:07:42.760 Critical Warnings: 00:07:42.760 Available Spare Space: OK 00:07:42.760 Temperature: OK 00:07:42.760 Device Reliability: OK 00:07:42.760 Read Only: No 00:07:42.760 Volatile Memory Backup: OK 00:07:42.760 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.760 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.760 Available Spare: 0% 00:07:42.760 Available Spare Threshold: 0% 00:07:42.760 Life Percentage Used: 0% 00:07:42.760 Data Units Read: 682 00:07:42.760 Data Units Written: 610 00:07:42.760 Host Read Commands: 37207 00:07:42.760 Host Write Commands: 36993 00:07:42.760 Controller Busy Time: 0 minutes 00:07:42.760 Power Cycles: 0 00:07:42.760 Power On Hours: 0 hours 00:07:42.760 Unsafe Shutdowns: 0 00:07:42.760 Unrecoverable Media Errors: 0 00:07:42.760 Lifetime Error Log Entries: 0 00:07:42.760 Warning Temperature Time: 0 minutes 00:07:42.760 Critical Temperature Time: 0 minutes 00:07:42.760 00:07:42.760 Number of Queues 00:07:42.760 ================ 00:07:42.760 Number of I/O Submission Queues: 64 00:07:42.760 Number of I/O Completion Queues: 64 00:07:42.760 00:07:42.760 ZNS Specific Controller Data 00:07:42.760 ============================ 00:07:42.760 Zone Append Size Limit: 0 00:07:42.760 00:07:42.760 00:07:42.760 Active Namespaces 00:07:42.760 ================= 00:07:42.760 Namespace ID:1 00:07:42.760 Error Recovery Timeout: Unlimited 00:07:42.760 Command Set Identifier: NVM (00h) 00:07:42.760 Deallocate: Supported 00:07:42.760 Deallocated/Unwritten Error: Supported 00:07:42.760 Deallocated Read Value: All 0x00 00:07:42.760 Deallocate in Write Zeroes: Not Supported 00:07:42.760 Deallocated Guard Field: 0xFFFF 00:07:42.760 Flush: 
Supported 00:07:42.760 Reservation: Not Supported 00:07:42.760 Metadata Transferred as: Separate Metadata Buffer 00:07:42.760 Namespace Sharing Capabilities: Private 00:07:42.760 Size (in LBAs): 1548666 (5GiB) 00:07:42.760 Capacity (in LBAs): 1548666 (5GiB) 00:07:42.760 Utilization (in LBAs): 1548666 (5GiB) 00:07:42.760 Thin Provisioning: Not Supported 00:07:42.760 Per-NS Atomic Units: No 00:07:42.760 Maximum Single Source Range Length: 128 00:07:42.760 Maximum Copy Length: 128 00:07:42.760 Maximum Source Range Count: 128 00:07:42.760 NGUID/EUI64 Never Reused: No 00:07:42.760 Namespace Write Protected: No 00:07:42.760 Number of LBA Formats: 8 00:07:42.760 Current LBA Format: LBA Format #07 00:07:42.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.760 00:07:42.760 NVM Specific Namespace Data 00:07:42.760 =========================== 00:07:42.760 Logical Block Storage Tag Mask: 0 00:07:42.760 Protection Information Capabilities: 00:07:42.760 16b Guard Protection Information Storage Tag Support: No 00:07:42.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.760 Storage Tag Check Read Support: No 00:07:42.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:42.760 11:22:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:43.032 ===================================================== 00:07:43.032 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:43.032 ===================================================== 00:07:43.032 Controller Capabilities/Features 00:07:43.032 ================================ 00:07:43.032 Vendor ID: 1b36 00:07:43.032 Subsystem Vendor ID: 1af4 00:07:43.032 Serial Number: 12341 00:07:43.032 Model Number: QEMU NVMe Ctrl 00:07:43.032 Firmware Version: 8.0.0 00:07:43.032 Recommended Arb Burst: 6 00:07:43.032 IEEE OUI Identifier: 00 54 52 00:07:43.032 Multi-path I/O 00:07:43.032 May have multiple subsystem ports: No 00:07:43.032 May have multiple controllers: No 00:07:43.032 Associated with SR-IOV VF: No 00:07:43.032 Max Data Transfer Size: 524288 00:07:43.032 Max Number of Namespaces: 256 00:07:43.032 Max Number of I/O Queues: 64 00:07:43.032 NVMe 
Specification Version (VS): 1.4 00:07:43.032 NVMe Specification Version (Identify): 1.4 00:07:43.032 Maximum Queue Entries: 2048 00:07:43.032 Contiguous Queues Required: Yes 00:07:43.032 Arbitration Mechanisms Supported 00:07:43.032 Weighted Round Robin: Not Supported 00:07:43.032 Vendor Specific: Not Supported 00:07:43.032 Reset Timeout: 7500 ms 00:07:43.032 Doorbell Stride: 4 bytes 00:07:43.032 NVM Subsystem Reset: Not Supported 00:07:43.032 Command Sets Supported 00:07:43.032 NVM Command Set: Supported 00:07:43.032 Boot Partition: Not Supported 00:07:43.032 Memory Page Size Minimum: 4096 bytes 00:07:43.032 Memory Page Size Maximum: 65536 bytes 00:07:43.032 Persistent Memory Region: Not Supported 00:07:43.032 Optional Asynchronous Events Supported 00:07:43.032 Namespace Attribute Notices: Supported 00:07:43.032 Firmware Activation Notices: Not Supported 00:07:43.032 ANA Change Notices: Not Supported 00:07:43.032 PLE Aggregate Log Change Notices: Not Supported 00:07:43.032 LBA Status Info Alert Notices: Not Supported 00:07:43.032 EGE Aggregate Log Change Notices: Not Supported 00:07:43.032 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.032 Zone Descriptor Change Notices: Not Supported 00:07:43.032 Discovery Log Change Notices: Not Supported 00:07:43.032 Controller Attributes 00:07:43.032 128-bit Host Identifier: Not Supported 00:07:43.032 Non-Operational Permissive Mode: Not Supported 00:07:43.032 NVM Sets: Not Supported 00:07:43.032 Read Recovery Levels: Not Supported 00:07:43.032 Endurance Groups: Not Supported 00:07:43.032 Predictable Latency Mode: Not Supported 00:07:43.032 Traffic Based Keep ALive: Not Supported 00:07:43.033 Namespace Granularity: Not Supported 00:07:43.033 SQ Associations: Not Supported 00:07:43.033 UUID List: Not Supported 00:07:43.033 Multi-Domain Subsystem: Not Supported 00:07:43.033 Fixed Capacity Management: Not Supported 00:07:43.033 Variable Capacity Management: Not Supported 00:07:43.033 Delete Endurance Group: Not Supported 00:07:43.033 Delete NVM Set: Not Supported 00:07:43.033 Extended LBA Formats Supported: Supported 00:07:43.033 Flexible Data Placement Supported: Not Supported 00:07:43.033 00:07:43.033 Controller Memory Buffer Support 00:07:43.033 ================================ 00:07:43.033 Supported: No 00:07:43.033 00:07:43.033 Persistent Memory Region Support 00:07:43.033 ================================ 00:07:43.033 Supported: No 00:07:43.033 00:07:43.033 Admin Command Set Attributes 00:07:43.033 ============================ 00:07:43.033 Security Send/Receive: Not Supported 00:07:43.033 Format NVM: Supported 00:07:43.033 Firmware Activate/Download: Not Supported 00:07:43.033 Namespace Management: Supported 00:07:43.033 Device Self-Test: Not Supported 00:07:43.033 Directives: Supported 00:07:43.033 NVMe-MI: Not Supported 00:07:43.033 Virtualization Management: Not Supported 00:07:43.033 Doorbell Buffer Config: Supported 00:07:43.033 Get LBA Status Capability: Not Supported 00:07:43.033 Command & Feature Lockdown Capability: Not Supported 00:07:43.033 Abort Command Limit: 4 00:07:43.033 Async Event Request Limit: 4 00:07:43.033 Number of Firmware Slots: N/A 00:07:43.033 Firmware Slot 1 Read-Only: N/A 00:07:43.033 Firmware Activation Without Reset: N/A 00:07:43.033 Multiple Update Detection Support: N/A 00:07:43.033 Firmware Update Granularity: No Information Provided 00:07:43.033 Per-Namespace SMART Log: Yes 00:07:43.033 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.033 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:43.033 Command Effects Log Page: Supported 00:07:43.033 Get Log Page Extended Data: Supported 00:07:43.033 Telemetry Log Pages: Not Supported 00:07:43.033 Persistent Event Log Pages: Not Supported 00:07:43.033 Supported Log Pages Log Page: May Support 00:07:43.033 Commands Supported & Effects Log Page: Not Supported 00:07:43.033 Feature Identifiers & Effects Log Page:May Support 00:07:43.033 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.033 Data Area 4 for Telemetry Log: Not Supported 00:07:43.033 Error Log Page Entries Supported: 1 00:07:43.033 Keep Alive: Not Supported 00:07:43.033 00:07:43.033 NVM Command Set Attributes 00:07:43.033 ========================== 00:07:43.033 Submission Queue Entry Size 00:07:43.033 Max: 64 00:07:43.033 Min: 64 00:07:43.033 Completion Queue Entry Size 00:07:43.033 Max: 16 00:07:43.033 Min: 16 00:07:43.033 Number of Namespaces: 256 00:07:43.033 Compare Command: Supported 00:07:43.033 Write Uncorrectable Command: Not Supported 00:07:43.033 Dataset Management Command: Supported 00:07:43.033 Write Zeroes Command: Supported 00:07:43.033 Set Features Save Field: Supported 00:07:43.033 Reservations: Not Supported 00:07:43.033 Timestamp: Supported 00:07:43.033 Copy: Supported 00:07:43.033 Volatile Write Cache: Present 00:07:43.033 Atomic Write Unit (Normal): 1 00:07:43.033 Atomic Write Unit (PFail): 1 00:07:43.033 Atomic Compare & Write Unit: 1 00:07:43.033 Fused Compare & Write: Not Supported 00:07:43.033 Scatter-Gather List 00:07:43.033 SGL Command Set: Supported 00:07:43.033 SGL Keyed: Not Supported 00:07:43.033 SGL Bit Bucket Descriptor: Not Supported 00:07:43.033 SGL Metadata Pointer: Not Supported 00:07:43.033 Oversized SGL: Not Supported 00:07:43.033 SGL Metadata Address: Not Supported 00:07:43.033 SGL Offset: Not Supported 00:07:43.033 Transport SGL Data Block: Not Supported 00:07:43.033 Replay Protected Memory Block: Not Supported 00:07:43.033 00:07:43.033 Firmware Slot Information 00:07:43.033 ========================= 00:07:43.033 Active slot: 1 00:07:43.033 Slot 1 Firmware Revision: 1.0 00:07:43.033 00:07:43.033 00:07:43.033 Commands Supported and Effects 00:07:43.033 ============================== 00:07:43.033 Admin Commands 00:07:43.033 -------------- 00:07:43.033 Delete I/O Submission Queue (00h): Supported 00:07:43.033 Create I/O Submission Queue (01h): Supported 00:07:43.033 Get Log Page (02h): Supported 00:07:43.033 Delete I/O Completion Queue (04h): Supported 00:07:43.033 Create I/O Completion Queue (05h): Supported 00:07:43.033 Identify (06h): Supported 00:07:43.033 Abort (08h): Supported 00:07:43.033 Set Features (09h): Supported 00:07:43.033 Get Features (0Ah): Supported 00:07:43.033 Asynchronous Event Request (0Ch): Supported 00:07:43.033 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.033 Directive Send (19h): Supported 00:07:43.033 Directive Receive (1Ah): Supported 00:07:43.033 Virtualization Management (1Ch): Supported 00:07:43.033 Doorbell Buffer Config (7Ch): Supported 00:07:43.033 Format NVM (80h): Supported LBA-Change 00:07:43.033 I/O Commands 00:07:43.033 ------------ 00:07:43.033 Flush (00h): Supported LBA-Change 00:07:43.033 Write (01h): Supported LBA-Change 00:07:43.033 Read (02h): Supported 00:07:43.033 Compare (05h): Supported 00:07:43.033 Write Zeroes (08h): Supported LBA-Change 00:07:43.033 Dataset Management (09h): Supported LBA-Change 00:07:43.033 Unknown (0Ch): Supported 00:07:43.033 Unknown (12h): Supported 00:07:43.033 Copy (19h): Supported LBA-Change 00:07:43.033 Unknown (1Dh): 
Supported LBA-Change 00:07:43.033 00:07:43.033 Error Log 00:07:43.033 ========= 00:07:43.033 00:07:43.033 Arbitration 00:07:43.033 =========== 00:07:43.033 Arbitration Burst: no limit 00:07:43.033 00:07:43.033 Power Management 00:07:43.033 ================ 00:07:43.033 Number of Power States: 1 00:07:43.033 Current Power State: Power State #0 00:07:43.033 Power State #0: 00:07:43.033 Max Power: 25.00 W 00:07:43.033 Non-Operational State: Operational 00:07:43.033 Entry Latency: 16 microseconds 00:07:43.033 Exit Latency: 4 microseconds 00:07:43.033 Relative Read Throughput: 0 00:07:43.033 Relative Read Latency: 0 00:07:43.033 Relative Write Throughput: 0 00:07:43.033 Relative Write Latency: 0 00:07:43.033 Idle Power: Not Reported 00:07:43.033 Active Power: Not Reported 00:07:43.033 Non-Operational Permissive Mode: Not Supported 00:07:43.033 00:07:43.033 Health Information 00:07:43.033 ================== 00:07:43.033 Critical Warnings: 00:07:43.033 Available Spare Space: OK 00:07:43.033 Temperature: OK 00:07:43.033 Device Reliability: OK 00:07:43.033 Read Only: No 00:07:43.033 Volatile Memory Backup: OK 00:07:43.033 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.033 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.033 Available Spare: 0% 00:07:43.033 Available Spare Threshold: 0% 00:07:43.033 Life Percentage Used: 0% 00:07:43.033 Data Units Read: 1021 00:07:43.033 Data Units Written: 886 00:07:43.033 Host Read Commands: 53940 00:07:43.033 Host Write Commands: 52711 00:07:43.033 Controller Busy Time: 0 minutes 00:07:43.033 Power Cycles: 0 00:07:43.033 Power On Hours: 0 hours 00:07:43.033 Unsafe Shutdowns: 0 00:07:43.033 Unrecoverable Media Errors: 0 00:07:43.033 Lifetime Error Log Entries: 0 00:07:43.033 Warning Temperature Time: 0 minutes 00:07:43.033 Critical Temperature Time: 0 minutes 00:07:43.033 00:07:43.033 Number of Queues 00:07:43.033 ================ 00:07:43.033 Number of I/O Submission Queues: 64 00:07:43.033 Number of I/O Completion Queues: 64 00:07:43.033 00:07:43.033 ZNS Specific Controller Data 00:07:43.033 ============================ 00:07:43.033 Zone Append Size Limit: 0 00:07:43.033 00:07:43.033 00:07:43.033 Active Namespaces 00:07:43.033 ================= 00:07:43.033 Namespace ID:1 00:07:43.033 Error Recovery Timeout: Unlimited 00:07:43.033 Command Set Identifier: NVM (00h) 00:07:43.033 Deallocate: Supported 00:07:43.033 Deallocated/Unwritten Error: Supported 00:07:43.033 Deallocated Read Value: All 0x00 00:07:43.033 Deallocate in Write Zeroes: Not Supported 00:07:43.033 Deallocated Guard Field: 0xFFFF 00:07:43.033 Flush: Supported 00:07:43.033 Reservation: Not Supported 00:07:43.033 Namespace Sharing Capabilities: Private 00:07:43.033 Size (in LBAs): 1310720 (5GiB) 00:07:43.033 Capacity (in LBAs): 1310720 (5GiB) 00:07:43.033 Utilization (in LBAs): 1310720 (5GiB) 00:07:43.033 Thin Provisioning: Not Supported 00:07:43.033 Per-NS Atomic Units: No 00:07:43.033 Maximum Single Source Range Length: 128 00:07:43.033 Maximum Copy Length: 128 00:07:43.033 Maximum Source Range Count: 128 00:07:43.033 NGUID/EUI64 Never Reused: No 00:07:43.033 Namespace Write Protected: No 00:07:43.033 Number of LBA Formats: 8 00:07:43.033 Current LBA Format: LBA Format #04 00:07:43.033 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.033 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.033 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.033 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.033 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:43.034 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.034 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.034 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.034 00:07:43.034 NVM Specific Namespace Data 00:07:43.034 =========================== 00:07:43.034 Logical Block Storage Tag Mask: 0 00:07:43.034 Protection Information Capabilities: 00:07:43.034 16b Guard Protection Information Storage Tag Support: No 00:07:43.034 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.034 Storage Tag Check Read Support: No 00:07:43.034 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.034 11:22:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.034 11:22:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:43.293 ===================================================== 00:07:43.293 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:43.293 ===================================================== 00:07:43.293 Controller Capabilities/Features 00:07:43.293 ================================ 00:07:43.294 Vendor ID: 1b36 00:07:43.294 Subsystem Vendor ID: 1af4 00:07:43.294 Serial Number: 12342 00:07:43.294 Model Number: QEMU NVMe Ctrl 00:07:43.294 Firmware Version: 8.0.0 00:07:43.294 Recommended Arb Burst: 6 00:07:43.294 IEEE OUI Identifier: 00 54 52 00:07:43.294 Multi-path I/O 00:07:43.294 May have multiple subsystem ports: No 00:07:43.294 May have multiple controllers: No 00:07:43.294 Associated with SR-IOV VF: No 00:07:43.294 Max Data Transfer Size: 524288 00:07:43.294 Max Number of Namespaces: 256 00:07:43.294 Max Number of I/O Queues: 64 00:07:43.294 NVMe Specification Version (VS): 1.4 00:07:43.294 NVMe Specification Version (Identify): 1.4 00:07:43.294 Maximum Queue Entries: 2048 00:07:43.294 Contiguous Queues Required: Yes 00:07:43.294 Arbitration Mechanisms Supported 00:07:43.294 Weighted Round Robin: Not Supported 00:07:43.294 Vendor Specific: Not Supported 00:07:43.294 Reset Timeout: 7500 ms 00:07:43.294 Doorbell Stride: 4 bytes 00:07:43.294 NVM Subsystem Reset: Not Supported 00:07:43.294 Command Sets Supported 00:07:43.294 NVM Command Set: Supported 00:07:43.294 Boot Partition: Not Supported 00:07:43.294 Memory Page Size Minimum: 4096 bytes 00:07:43.294 Memory Page Size Maximum: 65536 bytes 00:07:43.294 Persistent Memory Region: Not Supported 00:07:43.294 Optional Asynchronous Events Supported 00:07:43.294 Namespace Attribute Notices: Supported 00:07:43.294 Firmware Activation Notices: Not Supported 00:07:43.294 ANA Change Notices: Not Supported 00:07:43.294 PLE Aggregate Log Change Notices: Not Supported 00:07:43.294 LBA Status Info Alert Notices: 
Not Supported 00:07:43.294 EGE Aggregate Log Change Notices: Not Supported 00:07:43.294 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.294 Zone Descriptor Change Notices: Not Supported 00:07:43.294 Discovery Log Change Notices: Not Supported 00:07:43.294 Controller Attributes 00:07:43.294 128-bit Host Identifier: Not Supported 00:07:43.294 Non-Operational Permissive Mode: Not Supported 00:07:43.294 NVM Sets: Not Supported 00:07:43.294 Read Recovery Levels: Not Supported 00:07:43.294 Endurance Groups: Not Supported 00:07:43.294 Predictable Latency Mode: Not Supported 00:07:43.294 Traffic Based Keep ALive: Not Supported 00:07:43.294 Namespace Granularity: Not Supported 00:07:43.294 SQ Associations: Not Supported 00:07:43.294 UUID List: Not Supported 00:07:43.294 Multi-Domain Subsystem: Not Supported 00:07:43.294 Fixed Capacity Management: Not Supported 00:07:43.294 Variable Capacity Management: Not Supported 00:07:43.294 Delete Endurance Group: Not Supported 00:07:43.294 Delete NVM Set: Not Supported 00:07:43.294 Extended LBA Formats Supported: Supported 00:07:43.294 Flexible Data Placement Supported: Not Supported 00:07:43.294 00:07:43.294 Controller Memory Buffer Support 00:07:43.294 ================================ 00:07:43.294 Supported: No 00:07:43.294 00:07:43.294 Persistent Memory Region Support 00:07:43.294 ================================ 00:07:43.294 Supported: No 00:07:43.294 00:07:43.294 Admin Command Set Attributes 00:07:43.294 ============================ 00:07:43.294 Security Send/Receive: Not Supported 00:07:43.294 Format NVM: Supported 00:07:43.294 Firmware Activate/Download: Not Supported 00:07:43.294 Namespace Management: Supported 00:07:43.294 Device Self-Test: Not Supported 00:07:43.294 Directives: Supported 00:07:43.294 NVMe-MI: Not Supported 00:07:43.294 Virtualization Management: Not Supported 00:07:43.294 Doorbell Buffer Config: Supported 00:07:43.294 Get LBA Status Capability: Not Supported 00:07:43.294 Command & Feature Lockdown Capability: Not Supported 00:07:43.294 Abort Command Limit: 4 00:07:43.294 Async Event Request Limit: 4 00:07:43.294 Number of Firmware Slots: N/A 00:07:43.294 Firmware Slot 1 Read-Only: N/A 00:07:43.294 Firmware Activation Without Reset: N/A 00:07:43.294 Multiple Update Detection Support: N/A 00:07:43.294 Firmware Update Granularity: No Information Provided 00:07:43.294 Per-Namespace SMART Log: Yes 00:07:43.294 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.294 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:43.294 Command Effects Log Page: Supported 00:07:43.294 Get Log Page Extended Data: Supported 00:07:43.294 Telemetry Log Pages: Not Supported 00:07:43.294 Persistent Event Log Pages: Not Supported 00:07:43.294 Supported Log Pages Log Page: May Support 00:07:43.294 Commands Supported & Effects Log Page: Not Supported 00:07:43.294 Feature Identifiers & Effects Log Page:May Support 00:07:43.294 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.294 Data Area 4 for Telemetry Log: Not Supported 00:07:43.294 Error Log Page Entries Supported: 1 00:07:43.294 Keep Alive: Not Supported 00:07:43.294 00:07:43.294 NVM Command Set Attributes 00:07:43.294 ========================== 00:07:43.294 Submission Queue Entry Size 00:07:43.294 Max: 64 00:07:43.294 Min: 64 00:07:43.294 Completion Queue Entry Size 00:07:43.294 Max: 16 00:07:43.294 Min: 16 00:07:43.294 Number of Namespaces: 256 00:07:43.294 Compare Command: Supported 00:07:43.294 Write Uncorrectable Command: Not Supported 00:07:43.294 Dataset Management Command: 
Supported 00:07:43.294 Write Zeroes Command: Supported 00:07:43.294 Set Features Save Field: Supported 00:07:43.294 Reservations: Not Supported 00:07:43.294 Timestamp: Supported 00:07:43.294 Copy: Supported 00:07:43.294 Volatile Write Cache: Present 00:07:43.294 Atomic Write Unit (Normal): 1 00:07:43.294 Atomic Write Unit (PFail): 1 00:07:43.294 Atomic Compare & Write Unit: 1 00:07:43.294 Fused Compare & Write: Not Supported 00:07:43.294 Scatter-Gather List 00:07:43.294 SGL Command Set: Supported 00:07:43.294 SGL Keyed: Not Supported 00:07:43.294 SGL Bit Bucket Descriptor: Not Supported 00:07:43.294 SGL Metadata Pointer: Not Supported 00:07:43.294 Oversized SGL: Not Supported 00:07:43.294 SGL Metadata Address: Not Supported 00:07:43.294 SGL Offset: Not Supported 00:07:43.294 Transport SGL Data Block: Not Supported 00:07:43.294 Replay Protected Memory Block: Not Supported 00:07:43.294 00:07:43.294 Firmware Slot Information 00:07:43.294 ========================= 00:07:43.294 Active slot: 1 00:07:43.294 Slot 1 Firmware Revision: 1.0 00:07:43.294 00:07:43.294 00:07:43.294 Commands Supported and Effects 00:07:43.294 ============================== 00:07:43.294 Admin Commands 00:07:43.294 -------------- 00:07:43.294 Delete I/O Submission Queue (00h): Supported 00:07:43.294 Create I/O Submission Queue (01h): Supported 00:07:43.294 Get Log Page (02h): Supported 00:07:43.294 Delete I/O Completion Queue (04h): Supported 00:07:43.294 Create I/O Completion Queue (05h): Supported 00:07:43.294 Identify (06h): Supported 00:07:43.294 Abort (08h): Supported 00:07:43.294 Set Features (09h): Supported 00:07:43.294 Get Features (0Ah): Supported 00:07:43.294 Asynchronous Event Request (0Ch): Supported 00:07:43.294 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.294 Directive Send (19h): Supported 00:07:43.294 Directive Receive (1Ah): Supported 00:07:43.294 Virtualization Management (1Ch): Supported 00:07:43.294 Doorbell Buffer Config (7Ch): Supported 00:07:43.294 Format NVM (80h): Supported LBA-Change 00:07:43.294 I/O Commands 00:07:43.294 ------------ 00:07:43.294 Flush (00h): Supported LBA-Change 00:07:43.294 Write (01h): Supported LBA-Change 00:07:43.294 Read (02h): Supported 00:07:43.294 Compare (05h): Supported 00:07:43.294 Write Zeroes (08h): Supported LBA-Change 00:07:43.294 Dataset Management (09h): Supported LBA-Change 00:07:43.294 Unknown (0Ch): Supported 00:07:43.294 Unknown (12h): Supported 00:07:43.294 Copy (19h): Supported LBA-Change 00:07:43.294 Unknown (1Dh): Supported LBA-Change 00:07:43.294 00:07:43.294 Error Log 00:07:43.294 ========= 00:07:43.294 00:07:43.294 Arbitration 00:07:43.294 =========== 00:07:43.294 Arbitration Burst: no limit 00:07:43.294 00:07:43.294 Power Management 00:07:43.294 ================ 00:07:43.294 Number of Power States: 1 00:07:43.294 Current Power State: Power State #0 00:07:43.294 Power State #0: 00:07:43.294 Max Power: 25.00 W 00:07:43.294 Non-Operational State: Operational 00:07:43.294 Entry Latency: 16 microseconds 00:07:43.294 Exit Latency: 4 microseconds 00:07:43.294 Relative Read Throughput: 0 00:07:43.294 Relative Read Latency: 0 00:07:43.294 Relative Write Throughput: 0 00:07:43.294 Relative Write Latency: 0 00:07:43.294 Idle Power: Not Reported 00:07:43.294 Active Power: Not Reported 00:07:43.294 Non-Operational Permissive Mode: Not Supported 00:07:43.294 00:07:43.294 Health Information 00:07:43.294 ================== 00:07:43.294 Critical Warnings: 00:07:43.294 Available Spare Space: OK 00:07:43.294 Temperature: OK 00:07:43.294 Device 
Reliability: OK 00:07:43.294 Read Only: No 00:07:43.294 Volatile Memory Backup: OK 00:07:43.294 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.294 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.294 Available Spare: 0% 00:07:43.294 Available Spare Threshold: 0% 00:07:43.295 Life Percentage Used: 0% 00:07:43.295 Data Units Read: 2206 00:07:43.295 Data Units Written: 1993 00:07:43.295 Host Read Commands: 114012 00:07:43.295 Host Write Commands: 112281 00:07:43.295 Controller Busy Time: 0 minutes 00:07:43.295 Power Cycles: 0 00:07:43.295 Power On Hours: 0 hours 00:07:43.295 Unsafe Shutdowns: 0 00:07:43.295 Unrecoverable Media Errors: 0 00:07:43.295 Lifetime Error Log Entries: 0 00:07:43.295 Warning Temperature Time: 0 minutes 00:07:43.295 Critical Temperature Time: 0 minutes 00:07:43.295 00:07:43.295 Number of Queues 00:07:43.295 ================ 00:07:43.295 Number of I/O Submission Queues: 64 00:07:43.295 Number of I/O Completion Queues: 64 00:07:43.295 00:07:43.295 ZNS Specific Controller Data 00:07:43.295 ============================ 00:07:43.295 Zone Append Size Limit: 0 00:07:43.295 00:07:43.295 00:07:43.295 Active Namespaces 00:07:43.295 ================= 00:07:43.295 Namespace ID:1 00:07:43.295 Error Recovery Timeout: Unlimited 00:07:43.295 Command Set Identifier: NVM (00h) 00:07:43.295 Deallocate: Supported 00:07:43.295 Deallocated/Unwritten Error: Supported 00:07:43.295 Deallocated Read Value: All 0x00 00:07:43.295 Deallocate in Write Zeroes: Not Supported 00:07:43.295 Deallocated Guard Field: 0xFFFF 00:07:43.295 Flush: Supported 00:07:43.295 Reservation: Not Supported 00:07:43.295 Namespace Sharing Capabilities: Private 00:07:43.295 Size (in LBAs): 1048576 (4GiB) 00:07:43.295 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.295 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.295 Thin Provisioning: Not Supported 00:07:43.295 Per-NS Atomic Units: No 00:07:43.295 Maximum Single Source Range Length: 128 00:07:43.295 Maximum Copy Length: 128 00:07:43.295 Maximum Source Range Count: 128 00:07:43.295 NGUID/EUI64 Never Reused: No 00:07:43.295 Namespace Write Protected: No 00:07:43.295 Number of LBA Formats: 8 00:07:43.295 Current LBA Format: LBA Format #04 00:07:43.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.295 00:07:43.295 NVM Specific Namespace Data 00:07:43.295 =========================== 00:07:43.295 Logical Block Storage Tag Mask: 0 00:07:43.295 Protection Information Capabilities: 00:07:43.295 16b Guard Protection Information Storage Tag Support: No 00:07:43.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.295 Storage Tag Check Read Support: No 00:07:43.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Namespace ID:2 00:07:43.295 Error Recovery Timeout: Unlimited 00:07:43.295 Command Set Identifier: NVM (00h) 00:07:43.295 Deallocate: Supported 00:07:43.295 Deallocated/Unwritten Error: Supported 00:07:43.295 Deallocated Read Value: All 0x00 00:07:43.295 Deallocate in Write Zeroes: Not Supported 00:07:43.295 Deallocated Guard Field: 0xFFFF 00:07:43.295 Flush: Supported 00:07:43.295 Reservation: Not Supported 00:07:43.295 Namespace Sharing Capabilities: Private 00:07:43.295 Size (in LBAs): 1048576 (4GiB) 00:07:43.295 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.295 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.295 Thin Provisioning: Not Supported 00:07:43.295 Per-NS Atomic Units: No 00:07:43.295 Maximum Single Source Range Length: 128 00:07:43.295 Maximum Copy Length: 128 00:07:43.295 Maximum Source Range Count: 128 00:07:43.295 NGUID/EUI64 Never Reused: No 00:07:43.295 Namespace Write Protected: No 00:07:43.295 Number of LBA Formats: 8 00:07:43.295 Current LBA Format: LBA Format #04 00:07:43.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.295 00:07:43.295 NVM Specific Namespace Data 00:07:43.295 =========================== 00:07:43.295 Logical Block Storage Tag Mask: 0 00:07:43.295 Protection Information Capabilities: 00:07:43.295 16b Guard Protection Information Storage Tag Support: No 00:07:43.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.295 Storage Tag Check Read Support: No 00:07:43.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Namespace ID:3 00:07:43.295 Error Recovery Timeout: Unlimited 00:07:43.295 Command Set Identifier: NVM (00h) 00:07:43.295 Deallocate: Supported 00:07:43.295 Deallocated/Unwritten Error: Supported 00:07:43.295 Deallocated Read Value: All 0x00 00:07:43.295 Deallocate in Write Zeroes: Not Supported 00:07:43.295 Deallocated Guard Field: 0xFFFF 00:07:43.295 Flush: Supported 00:07:43.295 Reservation: Not Supported 00:07:43.295 
Namespace Sharing Capabilities: Private 00:07:43.295 Size (in LBAs): 1048576 (4GiB) 00:07:43.295 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.295 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.295 Thin Provisioning: Not Supported 00:07:43.295 Per-NS Atomic Units: No 00:07:43.295 Maximum Single Source Range Length: 128 00:07:43.295 Maximum Copy Length: 128 00:07:43.295 Maximum Source Range Count: 128 00:07:43.295 NGUID/EUI64 Never Reused: No 00:07:43.295 Namespace Write Protected: No 00:07:43.295 Number of LBA Formats: 8 00:07:43.295 Current LBA Format: LBA Format #04 00:07:43.295 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.295 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.295 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.295 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.295 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.295 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.295 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.295 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.295 00:07:43.295 NVM Specific Namespace Data 00:07:43.295 =========================== 00:07:43.295 Logical Block Storage Tag Mask: 0 00:07:43.295 Protection Information Capabilities: 00:07:43.295 16b Guard Protection Information Storage Tag Support: No 00:07:43.295 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.295 Storage Tag Check Read Support: No 00:07:43.295 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.295 11:22:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.295 11:22:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:43.558 ===================================================== 00:07:43.558 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:43.558 ===================================================== 00:07:43.558 Controller Capabilities/Features 00:07:43.558 ================================ 00:07:43.558 Vendor ID: 1b36 00:07:43.558 Subsystem Vendor ID: 1af4 00:07:43.558 Serial Number: 12343 00:07:43.558 Model Number: QEMU NVMe Ctrl 00:07:43.558 Firmware Version: 8.0.0 00:07:43.558 Recommended Arb Burst: 6 00:07:43.558 IEEE OUI Identifier: 00 54 52 00:07:43.558 Multi-path I/O 00:07:43.558 May have multiple subsystem ports: No 00:07:43.558 May have multiple controllers: Yes 00:07:43.558 Associated with SR-IOV VF: No 00:07:43.558 Max Data Transfer Size: 524288 00:07:43.558 Max Number of Namespaces: 256 00:07:43.558 Max Number of I/O Queues: 64 00:07:43.558 NVMe Specification Version (VS): 1.4 00:07:43.558 NVMe Specification Version (Identify): 1.4 00:07:43.558 Maximum Queue Entries: 2048 
00:07:43.558 Contiguous Queues Required: Yes 00:07:43.558 Arbitration Mechanisms Supported 00:07:43.558 Weighted Round Robin: Not Supported 00:07:43.558 Vendor Specific: Not Supported 00:07:43.558 Reset Timeout: 7500 ms 00:07:43.558 Doorbell Stride: 4 bytes 00:07:43.558 NVM Subsystem Reset: Not Supported 00:07:43.558 Command Sets Supported 00:07:43.558 NVM Command Set: Supported 00:07:43.558 Boot Partition: Not Supported 00:07:43.558 Memory Page Size Minimum: 4096 bytes 00:07:43.558 Memory Page Size Maximum: 65536 bytes 00:07:43.558 Persistent Memory Region: Not Supported 00:07:43.558 Optional Asynchronous Events Supported 00:07:43.558 Namespace Attribute Notices: Supported 00:07:43.558 Firmware Activation Notices: Not Supported 00:07:43.558 ANA Change Notices: Not Supported 00:07:43.558 PLE Aggregate Log Change Notices: Not Supported 00:07:43.558 LBA Status Info Alert Notices: Not Supported 00:07:43.558 EGE Aggregate Log Change Notices: Not Supported 00:07:43.558 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.558 Zone Descriptor Change Notices: Not Supported 00:07:43.558 Discovery Log Change Notices: Not Supported 00:07:43.558 Controller Attributes 00:07:43.558 128-bit Host Identifier: Not Supported 00:07:43.558 Non-Operational Permissive Mode: Not Supported 00:07:43.558 NVM Sets: Not Supported 00:07:43.558 Read Recovery Levels: Not Supported 00:07:43.558 Endurance Groups: Supported 00:07:43.558 Predictable Latency Mode: Not Supported 00:07:43.558 Traffic Based Keep ALive: Not Supported 00:07:43.558 Namespace Granularity: Not Supported 00:07:43.558 SQ Associations: Not Supported 00:07:43.558 UUID List: Not Supported 00:07:43.558 Multi-Domain Subsystem: Not Supported 00:07:43.558 Fixed Capacity Management: Not Supported 00:07:43.558 Variable Capacity Management: Not Supported 00:07:43.558 Delete Endurance Group: Not Supported 00:07:43.558 Delete NVM Set: Not Supported 00:07:43.558 Extended LBA Formats Supported: Supported 00:07:43.558 Flexible Data Placement Supported: Supported 00:07:43.558 00:07:43.558 Controller Memory Buffer Support 00:07:43.558 ================================ 00:07:43.558 Supported: No 00:07:43.558 00:07:43.558 Persistent Memory Region Support 00:07:43.558 ================================ 00:07:43.558 Supported: No 00:07:43.558 00:07:43.558 Admin Command Set Attributes 00:07:43.558 ============================ 00:07:43.558 Security Send/Receive: Not Supported 00:07:43.558 Format NVM: Supported 00:07:43.558 Firmware Activate/Download: Not Supported 00:07:43.558 Namespace Management: Supported 00:07:43.558 Device Self-Test: Not Supported 00:07:43.558 Directives: Supported 00:07:43.558 NVMe-MI: Not Supported 00:07:43.558 Virtualization Management: Not Supported 00:07:43.558 Doorbell Buffer Config: Supported 00:07:43.558 Get LBA Status Capability: Not Supported 00:07:43.558 Command & Feature Lockdown Capability: Not Supported 00:07:43.558 Abort Command Limit: 4 00:07:43.558 Async Event Request Limit: 4 00:07:43.558 Number of Firmware Slots: N/A 00:07:43.558 Firmware Slot 1 Read-Only: N/A 00:07:43.558 Firmware Activation Without Reset: N/A 00:07:43.558 Multiple Update Detection Support: N/A 00:07:43.558 Firmware Update Granularity: No Information Provided 00:07:43.558 Per-Namespace SMART Log: Yes 00:07:43.558 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.558 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:43.558 Command Effects Log Page: Supported 00:07:43.558 Get Log Page Extended Data: Supported 00:07:43.558 Telemetry Log Pages: Not 
Supported 00:07:43.558 Persistent Event Log Pages: Not Supported 00:07:43.558 Supported Log Pages Log Page: May Support 00:07:43.558 Commands Supported & Effects Log Page: Not Supported 00:07:43.558 Feature Identifiers & Effects Log Page:May Support 00:07:43.558 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.558 Data Area 4 for Telemetry Log: Not Supported 00:07:43.558 Error Log Page Entries Supported: 1 00:07:43.558 Keep Alive: Not Supported 00:07:43.558 00:07:43.558 NVM Command Set Attributes 00:07:43.558 ========================== 00:07:43.558 Submission Queue Entry Size 00:07:43.558 Max: 64 00:07:43.558 Min: 64 00:07:43.558 Completion Queue Entry Size 00:07:43.558 Max: 16 00:07:43.558 Min: 16 00:07:43.558 Number of Namespaces: 256 00:07:43.558 Compare Command: Supported 00:07:43.558 Write Uncorrectable Command: Not Supported 00:07:43.558 Dataset Management Command: Supported 00:07:43.558 Write Zeroes Command: Supported 00:07:43.558 Set Features Save Field: Supported 00:07:43.558 Reservations: Not Supported 00:07:43.558 Timestamp: Supported 00:07:43.558 Copy: Supported 00:07:43.558 Volatile Write Cache: Present 00:07:43.558 Atomic Write Unit (Normal): 1 00:07:43.558 Atomic Write Unit (PFail): 1 00:07:43.558 Atomic Compare & Write Unit: 1 00:07:43.558 Fused Compare & Write: Not Supported 00:07:43.558 Scatter-Gather List 00:07:43.558 SGL Command Set: Supported 00:07:43.558 SGL Keyed: Not Supported 00:07:43.558 SGL Bit Bucket Descriptor: Not Supported 00:07:43.558 SGL Metadata Pointer: Not Supported 00:07:43.558 Oversized SGL: Not Supported 00:07:43.558 SGL Metadata Address: Not Supported 00:07:43.558 SGL Offset: Not Supported 00:07:43.558 Transport SGL Data Block: Not Supported 00:07:43.558 Replay Protected Memory Block: Not Supported 00:07:43.558 00:07:43.558 Firmware Slot Information 00:07:43.558 ========================= 00:07:43.558 Active slot: 1 00:07:43.558 Slot 1 Firmware Revision: 1.0 00:07:43.558 00:07:43.558 00:07:43.558 Commands Supported and Effects 00:07:43.558 ============================== 00:07:43.558 Admin Commands 00:07:43.558 -------------- 00:07:43.558 Delete I/O Submission Queue (00h): Supported 00:07:43.558 Create I/O Submission Queue (01h): Supported 00:07:43.558 Get Log Page (02h): Supported 00:07:43.558 Delete I/O Completion Queue (04h): Supported 00:07:43.558 Create I/O Completion Queue (05h): Supported 00:07:43.558 Identify (06h): Supported 00:07:43.558 Abort (08h): Supported 00:07:43.558 Set Features (09h): Supported 00:07:43.558 Get Features (0Ah): Supported 00:07:43.558 Asynchronous Event Request (0Ch): Supported 00:07:43.558 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.558 Directive Send (19h): Supported 00:07:43.558 Directive Receive (1Ah): Supported 00:07:43.558 Virtualization Management (1Ch): Supported 00:07:43.558 Doorbell Buffer Config (7Ch): Supported 00:07:43.558 Format NVM (80h): Supported LBA-Change 00:07:43.558 I/O Commands 00:07:43.558 ------------ 00:07:43.558 Flush (00h): Supported LBA-Change 00:07:43.558 Write (01h): Supported LBA-Change 00:07:43.558 Read (02h): Supported 00:07:43.558 Compare (05h): Supported 00:07:43.558 Write Zeroes (08h): Supported LBA-Change 00:07:43.558 Dataset Management (09h): Supported LBA-Change 00:07:43.558 Unknown (0Ch): Supported 00:07:43.559 Unknown (12h): Supported 00:07:43.559 Copy (19h): Supported LBA-Change 00:07:43.559 Unknown (1Dh): Supported LBA-Change 00:07:43.559 00:07:43.559 Error Log 00:07:43.559 ========= 00:07:43.559 00:07:43.559 Arbitration 00:07:43.559 =========== 
00:07:43.559 Arbitration Burst: no limit 00:07:43.559 00:07:43.559 Power Management 00:07:43.559 ================ 00:07:43.559 Number of Power States: 1 00:07:43.559 Current Power State: Power State #0 00:07:43.559 Power State #0: 00:07:43.559 Max Power: 25.00 W 00:07:43.559 Non-Operational State: Operational 00:07:43.559 Entry Latency: 16 microseconds 00:07:43.559 Exit Latency: 4 microseconds 00:07:43.559 Relative Read Throughput: 0 00:07:43.559 Relative Read Latency: 0 00:07:43.559 Relative Write Throughput: 0 00:07:43.559 Relative Write Latency: 0 00:07:43.559 Idle Power: Not Reported 00:07:43.559 Active Power: Not Reported 00:07:43.559 Non-Operational Permissive Mode: Not Supported 00:07:43.559 00:07:43.559 Health Information 00:07:43.559 ================== 00:07:43.559 Critical Warnings: 00:07:43.559 Available Spare Space: OK 00:07:43.559 Temperature: OK 00:07:43.559 Device Reliability: OK 00:07:43.559 Read Only: No 00:07:43.559 Volatile Memory Backup: OK 00:07:43.559 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.559 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.559 Available Spare: 0% 00:07:43.559 Available Spare Threshold: 0% 00:07:43.559 Life Percentage Used: 0% 00:07:43.559 Data Units Read: 906 00:07:43.559 Data Units Written: 835 00:07:43.559 Host Read Commands: 39494 00:07:43.559 Host Write Commands: 38918 00:07:43.559 Controller Busy Time: 0 minutes 00:07:43.559 Power Cycles: 0 00:07:43.559 Power On Hours: 0 hours 00:07:43.559 Unsafe Shutdowns: 0 00:07:43.559 Unrecoverable Media Errors: 0 00:07:43.559 Lifetime Error Log Entries: 0 00:07:43.559 Warning Temperature Time: 0 minutes 00:07:43.559 Critical Temperature Time: 0 minutes 00:07:43.559 00:07:43.559 Number of Queues 00:07:43.559 ================ 00:07:43.559 Number of I/O Submission Queues: 64 00:07:43.559 Number of I/O Completion Queues: 64 00:07:43.559 00:07:43.559 ZNS Specific Controller Data 00:07:43.559 ============================ 00:07:43.559 Zone Append Size Limit: 0 00:07:43.559 00:07:43.559 00:07:43.559 Active Namespaces 00:07:43.559 ================= 00:07:43.559 Namespace ID:1 00:07:43.559 Error Recovery Timeout: Unlimited 00:07:43.559 Command Set Identifier: NVM (00h) 00:07:43.559 Deallocate: Supported 00:07:43.559 Deallocated/Unwritten Error: Supported 00:07:43.559 Deallocated Read Value: All 0x00 00:07:43.559 Deallocate in Write Zeroes: Not Supported 00:07:43.559 Deallocated Guard Field: 0xFFFF 00:07:43.559 Flush: Supported 00:07:43.559 Reservation: Not Supported 00:07:43.559 Namespace Sharing Capabilities: Multiple Controllers 00:07:43.559 Size (in LBAs): 262144 (1GiB) 00:07:43.559 Capacity (in LBAs): 262144 (1GiB) 00:07:43.559 Utilization (in LBAs): 262144 (1GiB) 00:07:43.559 Thin Provisioning: Not Supported 00:07:43.559 Per-NS Atomic Units: No 00:07:43.559 Maximum Single Source Range Length: 128 00:07:43.559 Maximum Copy Length: 128 00:07:43.559 Maximum Source Range Count: 128 00:07:43.559 NGUID/EUI64 Never Reused: No 00:07:43.559 Namespace Write Protected: No 00:07:43.559 Endurance group ID: 1 00:07:43.559 Number of LBA Formats: 8 00:07:43.559 Current LBA Format: LBA Format #04 00:07:43.559 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.559 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.559 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.559 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.559 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.559 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.559 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:43.559 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.559 00:07:43.559 Get Feature FDP: 00:07:43.559 ================ 00:07:43.559 Enabled: Yes 00:07:43.559 FDP configuration index: 0 00:07:43.559 00:07:43.559 FDP configurations log page 00:07:43.559 =========================== 00:07:43.559 Number of FDP configurations: 1 00:07:43.559 Version: 0 00:07:43.559 Size: 112 00:07:43.559 FDP Configuration Descriptor: 0 00:07:43.559 Descriptor Size: 96 00:07:43.559 Reclaim Group Identifier format: 2 00:07:43.559 FDP Volatile Write Cache: Not Present 00:07:43.559 FDP Configuration: Valid 00:07:43.559 Vendor Specific Size: 0 00:07:43.559 Number of Reclaim Groups: 2 00:07:43.559 Number of Reclaim Unit Handles: 8 00:07:43.559 Max Placement Identifiers: 128 00:07:43.559 Number of Namespaces Supported: 256 00:07:43.559 Reclaim unit Nominal Size: 6000000 bytes 00:07:43.559 Estimated Reclaim Unit Time Limit: Not Reported 00:07:43.559 RUH Desc #000: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #001: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #002: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #003: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #004: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #005: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #006: RUH Type: Initially Isolated 00:07:43.559 RUH Desc #007: RUH Type: Initially Isolated 00:07:43.559 00:07:43.559 FDP reclaim unit handle usage log page 00:07:43.559 ====================================== 00:07:43.559 Number of Reclaim Unit Handles: 8 00:07:43.559 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:43.559 RUH Usage Desc #001: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #002: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #003: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #004: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #005: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #006: RUH Attributes: Unused 00:07:43.559 RUH Usage Desc #007: RUH Attributes: Unused 00:07:43.559 00:07:43.559 FDP statistics log page 00:07:43.559 ======================= 00:07:43.559 Host bytes with metadata written: 519217152 00:07:43.559 Media bytes with metadata written: 519274496 00:07:43.559 Media bytes erased: 0 00:07:43.559 00:07:43.559 FDP events log page 00:07:43.559 =================== 00:07:43.559 Number of FDP events: 0 00:07:43.559 00:07:43.559 NVM Specific Namespace Data 00:07:43.559 =========================== 00:07:43.559 Logical Block Storage Tag Mask: 0 00:07:43.559 Protection Information Capabilities: 00:07:43.559 16b Guard Protection Information Storage Tag Support: No 00:07:43.559 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.559 Storage Tag Check Read Support: No 00:07:43.559 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.559 00:07:43.559 real 0m1.169s 00:07:43.559 user 0m0.441s 00:07:43.559 sys 0m0.518s 00:07:43.559 ************************************ 00:07:43.559 END TEST nvme_identify 00:07:43.559 ************************************ 00:07:43.559 11:22:28 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.559 11:22:28 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:43.559 11:22:28 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:43.559 11:22:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.559 11:22:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.559 11:22:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.559 ************************************ 00:07:43.559 START TEST nvme_perf 00:07:43.559 ************************************ 00:07:43.559 11:22:28 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:43.559 11:22:28 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:44.949 Initializing NVMe Controllers 00:07:44.949 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:44.949 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:44.949 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:44.949 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:44.949 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:44.949 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:44.949 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:44.949 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:44.949 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:44.949 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:44.949 Initialization complete. Launching workers. 
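The spdk_nvme_perf summary blocks that follow report per-device latency percentiles as lines of the form "99.00000% : <value>us", one block per attached controller. A minimal shell sketch for pulling those p99 figures out of a saved copy of this console output is shown below; it is not part of the SPDK test scripts, "perf.log" is a hypothetical file name, and it assumes the log was captured with one timestamped entry per line, as Jenkins prints it.

#!/usr/bin/env bash
# Sketch only: list the p99 read latency reported for each controller.
# Keep the "Summary latency data ..." headers so each percentile line
# stays associated with its PCIe address, then strip the Jenkins
# "HH:MM:SS.mmm " time prefix for readability.
grep -E 'Summary latency data|99\.00000% *:' perf.log |
  sed -E 's/^[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]{3} //'

Running it against this build's output would print, in order, each "Summary latency data for PCIE (...)" header followed by that device's "99.00000% : ...us" line.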
00:07:44.949 ======================================================== 00:07:44.949 Latency(us) 00:07:44.949 Device Information : IOPS MiB/s Average min max 00:07:44.949 PCIE (0000:00:10.0) NSID 1 from core 0: 17143.69 200.90 7475.68 5702.33 32223.35 00:07:44.949 PCIE (0000:00:11.0) NSID 1 from core 0: 17143.69 200.90 7465.51 5781.88 30468.13 00:07:44.949 PCIE (0000:00:13.0) NSID 1 from core 0: 17143.69 200.90 7454.24 5910.23 29223.78 00:07:44.949 PCIE (0000:00:12.0) NSID 1 from core 0: 17143.69 200.90 7442.79 5871.00 27571.62 00:07:44.949 PCIE (0000:00:12.0) NSID 2 from core 0: 17143.69 200.90 7431.12 5862.53 25909.28 00:07:44.949 PCIE (0000:00:12.0) NSID 3 from core 0: 17207.65 201.65 7392.10 5788.76 20413.15 00:07:44.949 ======================================================== 00:07:44.949 Total : 102926.08 1206.17 7443.54 5702.33 32223.35 00:07:44.949 00:07:44.949 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:44.949 ================================================================================= 00:07:44.949 1.00000% : 6074.683us 00:07:44.949 10.00000% : 6427.569us 00:07:44.949 25.00000% : 6654.425us 00:07:44.949 50.00000% : 7007.311us 00:07:44.949 75.00000% : 7410.609us 00:07:44.949 90.00000% : 8872.566us 00:07:44.949 95.00000% : 10838.646us 00:07:44.949 98.00000% : 11594.831us 00:07:44.949 99.00000% : 13006.375us 00:07:44.949 99.50000% : 26617.698us 00:07:44.949 99.90000% : 31860.578us 00:07:44.949 99.99000% : 32263.877us 00:07:44.949 99.99900% : 32263.877us 00:07:44.949 99.99990% : 32263.877us 00:07:44.949 99.99999% : 32263.877us 00:07:44.949 00:07:44.949 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:44.949 ================================================================================= 00:07:44.949 1.00000% : 6150.302us 00:07:44.949 10.00000% : 6503.188us 00:07:44.949 25.00000% : 6704.837us 00:07:44.949 50.00000% : 7007.311us 00:07:44.949 75.00000% : 7360.197us 00:07:44.949 90.00000% : 8922.978us 00:07:44.949 95.00000% : 10838.646us 00:07:44.949 98.00000% : 11695.655us 00:07:44.949 99.00000% : 13107.200us 00:07:44.949 99.50000% : 25306.978us 00:07:44.949 99.90000% : 30247.385us 00:07:44.949 99.99000% : 30449.034us 00:07:44.949 99.99900% : 30650.683us 00:07:44.949 99.99990% : 30650.683us 00:07:44.949 99.99999% : 30650.683us 00:07:44.949 00:07:44.949 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:44.949 ================================================================================= 00:07:44.949 1.00000% : 6125.095us 00:07:44.949 10.00000% : 6452.775us 00:07:44.949 25.00000% : 6704.837us 00:07:44.949 50.00000% : 7007.311us 00:07:44.949 75.00000% : 7360.197us 00:07:44.949 90.00000% : 9124.628us 00:07:44.949 95.00000% : 10586.585us 00:07:44.949 98.00000% : 11947.717us 00:07:44.949 99.00000% : 12905.551us 00:07:44.949 99.50000% : 24097.083us 00:07:44.949 99.90000% : 28835.840us 00:07:44.949 99.99000% : 29239.138us 00:07:44.949 99.99900% : 29239.138us 00:07:44.949 99.99990% : 29239.138us 00:07:44.949 99.99999% : 29239.138us 00:07:44.949 00:07:44.949 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:44.949 ================================================================================= 00:07:44.949 1.00000% : 6175.508us 00:07:44.949 10.00000% : 6503.188us 00:07:44.949 25.00000% : 6704.837us 00:07:44.949 50.00000% : 7007.311us 00:07:44.949 75.00000% : 7360.197us 00:07:44.949 90.00000% : 9124.628us 00:07:44.950 95.00000% : 10485.760us 00:07:44.950 98.00000% : 11998.129us 00:07:44.950 99.00000% 
: 12754.314us 00:07:44.950 99.50000% : 22383.065us 00:07:44.950 99.90000% : 27222.646us 00:07:44.950 99.99000% : 27625.945us 00:07:44.950 99.99900% : 27625.945us 00:07:44.950 99.99990% : 27625.945us 00:07:44.950 99.99999% : 27625.945us 00:07:44.950 00:07:44.950 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:44.950 ================================================================================= 00:07:44.950 1.00000% : 6150.302us 00:07:44.950 10.00000% : 6503.188us 00:07:44.950 25.00000% : 6704.837us 00:07:44.950 50.00000% : 7007.311us 00:07:44.950 75.00000% : 7360.197us 00:07:44.950 90.00000% : 9124.628us 00:07:44.950 95.00000% : 10536.172us 00:07:44.950 98.00000% : 11897.305us 00:07:44.950 99.00000% : 12754.314us 00:07:44.950 99.50000% : 20669.046us 00:07:44.950 99.90000% : 25508.628us 00:07:44.950 99.99000% : 26012.751us 00:07:44.950 99.99900% : 26012.751us 00:07:44.950 99.99990% : 26012.751us 00:07:44.950 99.99999% : 26012.751us 00:07:44.950 00:07:44.950 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:44.950 ================================================================================= 00:07:44.950 1.00000% : 6175.508us 00:07:44.950 10.00000% : 6503.188us 00:07:44.950 25.00000% : 6704.837us 00:07:44.950 50.00000% : 7007.311us 00:07:44.950 75.00000% : 7360.197us 00:07:44.950 90.00000% : 9023.803us 00:07:44.950 95.00000% : 10737.822us 00:07:44.950 98.00000% : 11746.068us 00:07:44.950 99.00000% : 13107.200us 00:07:44.950 99.50000% : 14821.218us 00:07:44.950 99.90000% : 20064.098us 00:07:44.950 99.99000% : 20467.397us 00:07:44.950 99.99900% : 20467.397us 00:07:44.950 99.99990% : 20467.397us 00:07:44.950 99.99999% : 20467.397us 00:07:44.950 00:07:44.950 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:44.950 ============================================================================== 00:07:44.950 Range in us Cumulative IO count 00:07:44.950 5696.591 - 5721.797: 0.0117% ( 2) 00:07:44.950 5721.797 - 5747.003: 0.0350% ( 4) 00:07:44.950 5772.209 - 5797.415: 0.0525% ( 3) 00:07:44.950 5797.415 - 5822.622: 0.0583% ( 1) 00:07:44.950 5822.622 - 5847.828: 0.0641% ( 1) 00:07:44.950 5847.828 - 5873.034: 0.0816% ( 3) 00:07:44.950 5873.034 - 5898.240: 0.1283% ( 8) 00:07:44.950 5898.240 - 5923.446: 0.1807% ( 9) 00:07:44.950 5923.446 - 5948.652: 0.2740% ( 16) 00:07:44.950 5948.652 - 5973.858: 0.3673% ( 16) 00:07:44.950 5973.858 - 5999.065: 0.5014% ( 23) 00:07:44.950 5999.065 - 6024.271: 0.6180% ( 20) 00:07:44.950 6024.271 - 6049.477: 0.8570% ( 41) 00:07:44.950 6049.477 - 6074.683: 1.0669% ( 36) 00:07:44.950 6074.683 - 6099.889: 1.2768% ( 36) 00:07:44.950 6099.889 - 6125.095: 1.5858% ( 53) 00:07:44.950 6125.095 - 6150.302: 1.9123% ( 56) 00:07:44.950 6150.302 - 6175.508: 2.3904% ( 82) 00:07:44.950 6175.508 - 6200.714: 3.0259% ( 109) 00:07:44.950 6200.714 - 6225.920: 3.6031% ( 99) 00:07:44.950 6225.920 - 6251.126: 4.1744% ( 98) 00:07:44.950 6251.126 - 6276.332: 4.9090% ( 126) 00:07:44.950 6276.332 - 6301.538: 5.7544% ( 145) 00:07:44.950 6301.538 - 6326.745: 6.6581% ( 155) 00:07:44.950 6326.745 - 6351.951: 7.5910% ( 160) 00:07:44.950 6351.951 - 6377.157: 8.6812% ( 187) 00:07:44.950 6377.157 - 6402.363: 9.9464% ( 217) 00:07:44.950 6402.363 - 6427.569: 11.1765% ( 211) 00:07:44.950 6427.569 - 6452.775: 12.5641% ( 238) 00:07:44.950 6452.775 - 6503.188: 15.4209% ( 490) 00:07:44.950 6503.188 - 6553.600: 18.5051% ( 529) 00:07:44.950 6553.600 - 6604.012: 21.7001% ( 548) 00:07:44.950 6604.012 - 6654.425: 25.1807% ( 597) 00:07:44.950 6654.425 - 
6704.837: 28.7197% ( 607) 00:07:44.950 6704.837 - 6755.249: 32.4569% ( 641) 00:07:44.950 6755.249 - 6805.662: 36.2232% ( 646) 00:07:44.950 6805.662 - 6856.074: 40.1294% ( 670) 00:07:44.950 6856.074 - 6906.486: 44.0182% ( 667) 00:07:44.950 6906.486 - 6956.898: 47.8370% ( 655) 00:07:44.950 6956.898 - 7007.311: 51.6733% ( 658) 00:07:44.950 7007.311 - 7057.723: 55.5154% ( 659) 00:07:44.950 7057.723 - 7108.135: 59.1126% ( 617) 00:07:44.950 7108.135 - 7158.548: 62.5758% ( 594) 00:07:44.950 7158.548 - 7208.960: 65.8874% ( 568) 00:07:44.950 7208.960 - 7259.372: 68.8724% ( 512) 00:07:44.950 7259.372 - 7309.785: 71.6185% ( 471) 00:07:44.950 7309.785 - 7360.197: 74.0147% ( 411) 00:07:44.950 7360.197 - 7410.609: 76.0203% ( 344) 00:07:44.950 7410.609 - 7461.022: 77.7460% ( 296) 00:07:44.950 7461.022 - 7511.434: 79.2969% ( 266) 00:07:44.950 7511.434 - 7561.846: 80.6145% ( 226) 00:07:44.950 7561.846 - 7612.258: 81.7572% ( 196) 00:07:44.950 7612.258 - 7662.671: 82.6084% ( 146) 00:07:44.950 7662.671 - 7713.083: 83.4014% ( 136) 00:07:44.950 7713.083 - 7763.495: 84.0485% ( 111) 00:07:44.950 7763.495 - 7813.908: 84.6607% ( 105) 00:07:44.950 7813.908 - 7864.320: 85.1446% ( 83) 00:07:44.950 7864.320 - 7914.732: 85.5877% ( 76) 00:07:44.950 7914.732 - 7965.145: 85.9783% ( 67) 00:07:44.950 7965.145 - 8015.557: 86.3806% ( 69) 00:07:44.950 8015.557 - 8065.969: 86.7362% ( 61) 00:07:44.950 8065.969 - 8116.382: 86.9869% ( 43) 00:07:44.950 8116.382 - 8166.794: 87.2668% ( 48) 00:07:44.950 8166.794 - 8217.206: 87.5233% ( 44) 00:07:44.950 8217.206 - 8267.618: 87.7973% ( 47) 00:07:44.950 8267.618 - 8318.031: 88.0364% ( 41) 00:07:44.950 8318.031 - 8368.443: 88.2229% ( 32) 00:07:44.950 8368.443 - 8418.855: 88.4328% ( 36) 00:07:44.950 8418.855 - 8469.268: 88.6311% ( 34) 00:07:44.950 8469.268 - 8519.680: 88.8235% ( 33) 00:07:44.950 8519.680 - 8570.092: 89.0450% ( 38) 00:07:44.950 8570.092 - 8620.505: 89.3074% ( 45) 00:07:44.950 8620.505 - 8670.917: 89.4998% ( 33) 00:07:44.950 8670.917 - 8721.329: 89.6514% ( 26) 00:07:44.950 8721.329 - 8771.742: 89.7971% ( 25) 00:07:44.950 8771.742 - 8822.154: 89.9312% ( 23) 00:07:44.950 8822.154 - 8872.566: 90.0595% ( 22) 00:07:44.950 8872.566 - 8922.978: 90.1528% ( 16) 00:07:44.950 8922.978 - 8973.391: 90.2694% ( 20) 00:07:44.950 8973.391 - 9023.803: 90.3801% ( 19) 00:07:44.950 9023.803 - 9074.215: 90.5026% ( 21) 00:07:44.950 9074.215 - 9124.628: 90.6017% ( 17) 00:07:44.950 9124.628 - 9175.040: 90.7241% ( 21) 00:07:44.950 9175.040 - 9225.452: 90.8174% ( 16) 00:07:44.950 9225.452 - 9275.865: 90.8990% ( 14) 00:07:44.950 9275.865 - 9326.277: 90.9748% ( 13) 00:07:44.950 9326.277 - 9376.689: 91.0623% ( 15) 00:07:44.950 9376.689 - 9427.102: 91.1847% ( 21) 00:07:44.950 9427.102 - 9477.514: 91.2838% ( 17) 00:07:44.950 9477.514 - 9527.926: 91.3829% ( 17) 00:07:44.950 9527.926 - 9578.338: 91.4646% ( 14) 00:07:44.950 9578.338 - 9628.751: 91.5287% ( 11) 00:07:44.950 9628.751 - 9679.163: 91.6395% ( 19) 00:07:44.950 9679.163 - 9729.575: 91.7211% ( 14) 00:07:44.950 9729.575 - 9779.988: 91.8027% ( 14) 00:07:44.950 9779.988 - 9830.400: 91.8902% ( 15) 00:07:44.950 9830.400 - 9880.812: 91.9951% ( 18) 00:07:44.950 9880.812 - 9931.225: 92.0942% ( 17) 00:07:44.950 9931.225 - 9981.637: 92.2108% ( 20) 00:07:44.950 9981.637 - 10032.049: 92.3333% ( 21) 00:07:44.950 10032.049 - 10082.462: 92.4790% ( 25) 00:07:44.950 10082.462 - 10132.874: 92.5898% ( 19) 00:07:44.950 10132.874 - 10183.286: 92.7122% ( 21) 00:07:44.950 10183.286 - 10233.698: 92.8463% ( 23) 00:07:44.950 10233.698 - 10284.111: 92.9862% ( 24) 00:07:44.950 
10284.111 - 10334.523: 93.1320% ( 25) 00:07:44.950 10334.523 - 10384.935: 93.2778% ( 25) 00:07:44.950 10384.935 - 10435.348: 93.4118% ( 23) 00:07:44.950 10435.348 - 10485.760: 93.5693% ( 27) 00:07:44.950 10485.760 - 10536.172: 93.7092% ( 24) 00:07:44.950 10536.172 - 10586.585: 93.9074% ( 34) 00:07:44.950 10586.585 - 10636.997: 94.1056% ( 34) 00:07:44.950 10636.997 - 10687.409: 94.3097% ( 35) 00:07:44.950 10687.409 - 10737.822: 94.5546% ( 42) 00:07:44.950 10737.822 - 10788.234: 94.8111% ( 44) 00:07:44.950 10788.234 - 10838.646: 95.0443% ( 40) 00:07:44.950 10838.646 - 10889.058: 95.2892% ( 42) 00:07:44.950 10889.058 - 10939.471: 95.5282% ( 41) 00:07:44.950 10939.471 - 10989.883: 95.8081% ( 48) 00:07:44.950 10989.883 - 11040.295: 96.0471% ( 41) 00:07:44.950 11040.295 - 11090.708: 96.2687% ( 38) 00:07:44.950 11090.708 - 11141.120: 96.4494% ( 31) 00:07:44.950 11141.120 - 11191.532: 96.6418% ( 33) 00:07:44.950 11191.532 - 11241.945: 96.8342% ( 33) 00:07:44.950 11241.945 - 11292.357: 97.0091% ( 30) 00:07:44.950 11292.357 - 11342.769: 97.2015% ( 33) 00:07:44.950 11342.769 - 11393.182: 97.3764% ( 30) 00:07:44.950 11393.182 - 11443.594: 97.5222% ( 25) 00:07:44.950 11443.594 - 11494.006: 97.7146% ( 33) 00:07:44.950 11494.006 - 11544.418: 97.8720% ( 27) 00:07:44.950 11544.418 - 11594.831: 98.0294% ( 27) 00:07:44.950 11594.831 - 11645.243: 98.1402% ( 19) 00:07:44.950 11645.243 - 11695.655: 98.2509% ( 19) 00:07:44.950 11695.655 - 11746.068: 98.3092% ( 10) 00:07:44.950 11746.068 - 11796.480: 98.3734% ( 11) 00:07:44.950 11796.480 - 11846.892: 98.4200% ( 8) 00:07:44.950 11846.892 - 11897.305: 98.4550% ( 6) 00:07:44.950 11897.305 - 11947.717: 98.5016% ( 8) 00:07:44.950 11947.717 - 11998.129: 98.5308% ( 5) 00:07:44.950 11998.129 - 12048.542: 98.5541% ( 4) 00:07:44.950 12048.542 - 12098.954: 98.5716% ( 3) 00:07:44.950 12098.954 - 12149.366: 98.5891% ( 3) 00:07:44.950 12149.366 - 12199.778: 98.6124% ( 4) 00:07:44.950 12199.778 - 12250.191: 98.6299% ( 3) 00:07:44.950 12250.191 - 12300.603: 98.6474% ( 3) 00:07:44.950 12300.603 - 12351.015: 98.6765% ( 5) 00:07:44.950 12351.015 - 12401.428: 98.6882% ( 2) 00:07:44.950 12401.428 - 12451.840: 98.7057% ( 3) 00:07:44.950 12451.840 - 12502.252: 98.7290% ( 4) 00:07:44.950 12502.252 - 12552.665: 98.7465% ( 3) 00:07:44.950 12552.665 - 12603.077: 98.7815% ( 6) 00:07:44.950 12603.077 - 12653.489: 98.8048% ( 4) 00:07:44.950 12653.489 - 12703.902: 98.8340% ( 5) 00:07:44.950 12703.902 - 12754.314: 98.8573% ( 4) 00:07:44.950 12754.314 - 12804.726: 98.8864% ( 5) 00:07:44.950 12804.726 - 12855.138: 98.9214% ( 6) 00:07:44.950 12855.138 - 12905.551: 98.9447% ( 4) 00:07:44.950 12905.551 - 13006.375: 99.0089% ( 11) 00:07:44.951 13006.375 - 13107.200: 99.0613% ( 9) 00:07:44.951 13107.200 - 13208.025: 99.1080% ( 8) 00:07:44.951 13208.025 - 13308.849: 99.1430% ( 6) 00:07:44.951 13308.849 - 13409.674: 99.1779% ( 6) 00:07:44.951 13409.674 - 13510.498: 99.2129% ( 6) 00:07:44.951 13510.498 - 13611.323: 99.2479% ( 6) 00:07:44.951 13611.323 - 13712.148: 99.2537% ( 1) 00:07:44.951 25407.803 - 25508.628: 99.2596% ( 1) 00:07:44.951 25508.628 - 25609.452: 99.2829% ( 4) 00:07:44.951 25609.452 - 25710.277: 99.3062% ( 4) 00:07:44.951 25710.277 - 25811.102: 99.3237% ( 3) 00:07:44.951 25811.102 - 26012.751: 99.3645% ( 7) 00:07:44.951 26012.751 - 26214.400: 99.4170% ( 9) 00:07:44.951 26214.400 - 26416.049: 99.4636% ( 8) 00:07:44.951 26416.049 - 26617.698: 99.5103% ( 8) 00:07:44.951 26617.698 - 26819.348: 99.5511% ( 7) 00:07:44.951 26819.348 - 27020.997: 99.6035% ( 9) 00:07:44.951 27020.997 - 27222.646: 
99.6269% ( 4) 00:07:44.951 30449.034 - 30650.683: 99.6444% ( 3) 00:07:44.951 30650.683 - 30852.332: 99.6910% ( 8) 00:07:44.951 30852.332 - 31053.982: 99.7260% ( 6) 00:07:44.951 31053.982 - 31255.631: 99.7843% ( 10) 00:07:44.951 31255.631 - 31457.280: 99.8309% ( 8) 00:07:44.951 31457.280 - 31658.929: 99.8776% ( 8) 00:07:44.951 31658.929 - 31860.578: 99.9184% ( 7) 00:07:44.951 31860.578 - 32062.228: 99.9650% ( 8) 00:07:44.951 32062.228 - 32263.877: 100.0000% ( 6) 00:07:44.951 00:07:44.951 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:44.951 ============================================================================== 00:07:44.951 Range in us Cumulative IO count 00:07:44.951 5772.209 - 5797.415: 0.0292% ( 5) 00:07:44.951 5797.415 - 5822.622: 0.0408% ( 2) 00:07:44.951 5822.622 - 5847.828: 0.0525% ( 2) 00:07:44.951 5873.034 - 5898.240: 0.0583% ( 1) 00:07:44.951 5898.240 - 5923.446: 0.0816% ( 4) 00:07:44.951 5923.446 - 5948.652: 0.1049% ( 4) 00:07:44.951 5948.652 - 5973.858: 0.1399% ( 6) 00:07:44.951 5973.858 - 5999.065: 0.1866% ( 8) 00:07:44.951 5999.065 - 6024.271: 0.3265% ( 24) 00:07:44.951 6024.271 - 6049.477: 0.4081% ( 14) 00:07:44.951 6049.477 - 6074.683: 0.4897% ( 14) 00:07:44.951 6074.683 - 6099.889: 0.6530% ( 28) 00:07:44.951 6099.889 - 6125.095: 0.8745% ( 38) 00:07:44.951 6125.095 - 6150.302: 1.1719% ( 51) 00:07:44.951 6150.302 - 6175.508: 1.5217% ( 60) 00:07:44.951 6175.508 - 6200.714: 1.8074% ( 49) 00:07:44.951 6200.714 - 6225.920: 2.1922% ( 66) 00:07:44.951 6225.920 - 6251.126: 2.6994% ( 87) 00:07:44.951 6251.126 - 6276.332: 3.3757% ( 116) 00:07:44.951 6276.332 - 6301.538: 4.0170% ( 110) 00:07:44.951 6301.538 - 6326.745: 4.7808% ( 131) 00:07:44.951 6326.745 - 6351.951: 5.7603% ( 168) 00:07:44.951 6351.951 - 6377.157: 6.5882% ( 142) 00:07:44.951 6377.157 - 6402.363: 7.5851% ( 171) 00:07:44.951 6402.363 - 6427.569: 8.8153% ( 211) 00:07:44.951 6427.569 - 6452.775: 9.9872% ( 201) 00:07:44.951 6452.775 - 6503.188: 12.7857% ( 480) 00:07:44.951 6503.188 - 6553.600: 16.1147% ( 571) 00:07:44.951 6553.600 - 6604.012: 19.7586% ( 625) 00:07:44.951 6604.012 - 6654.425: 23.4608% ( 635) 00:07:44.951 6654.425 - 6704.837: 27.3146% ( 661) 00:07:44.951 6704.837 - 6755.249: 31.5065% ( 719) 00:07:44.951 6755.249 - 6805.662: 35.9083% ( 755) 00:07:44.951 6805.662 - 6856.074: 40.2344% ( 742) 00:07:44.951 6856.074 - 6906.486: 44.6770% ( 762) 00:07:44.951 6906.486 - 6956.898: 48.9914% ( 740) 00:07:44.951 6956.898 - 7007.311: 53.3057% ( 740) 00:07:44.951 7007.311 - 7057.723: 57.4394% ( 709) 00:07:44.951 7057.723 - 7108.135: 61.2757% ( 658) 00:07:44.951 7108.135 - 7158.548: 64.8379% ( 611) 00:07:44.951 7158.548 - 7208.960: 68.1728% ( 572) 00:07:44.951 7208.960 - 7259.372: 71.1287% ( 507) 00:07:44.951 7259.372 - 7309.785: 73.6474% ( 432) 00:07:44.951 7309.785 - 7360.197: 75.5947% ( 334) 00:07:44.951 7360.197 - 7410.609: 77.4079% ( 311) 00:07:44.951 7410.609 - 7461.022: 78.9937% ( 272) 00:07:44.951 7461.022 - 7511.434: 80.2822% ( 221) 00:07:44.951 7511.434 - 7561.846: 81.4657% ( 203) 00:07:44.951 7561.846 - 7612.258: 82.4160% ( 163) 00:07:44.951 7612.258 - 7662.671: 83.2264% ( 139) 00:07:44.951 7662.671 - 7713.083: 83.8619% ( 109) 00:07:44.951 7713.083 - 7763.495: 84.4391% ( 99) 00:07:44.951 7763.495 - 7813.908: 84.9172% ( 82) 00:07:44.951 7813.908 - 7864.320: 85.2787% ( 62) 00:07:44.951 7864.320 - 7914.732: 85.5819% ( 52) 00:07:44.951 7914.732 - 7965.145: 85.9025% ( 55) 00:07:44.951 7965.145 - 8015.557: 86.1824% ( 48) 00:07:44.951 8015.557 - 8065.969: 86.4681% ( 49) 00:07:44.951 8065.969 
- 8116.382: 86.7071% ( 41) 00:07:44.951 8116.382 - 8166.794: 86.9578% ( 43) 00:07:44.951 8166.794 - 8217.206: 87.1793% ( 38) 00:07:44.951 8217.206 - 8267.618: 87.4184% ( 41) 00:07:44.951 8267.618 - 8318.031: 87.6691% ( 43) 00:07:44.951 8318.031 - 8368.443: 87.9139% ( 42) 00:07:44.951 8368.443 - 8418.855: 88.1472% ( 40) 00:07:44.951 8418.855 - 8469.268: 88.3687% ( 38) 00:07:44.951 8469.268 - 8519.680: 88.6019% ( 40) 00:07:44.951 8519.680 - 8570.092: 88.7943% ( 33) 00:07:44.951 8570.092 - 8620.505: 89.0100% ( 37) 00:07:44.951 8620.505 - 8670.917: 89.2374% ( 39) 00:07:44.951 8670.917 - 8721.329: 89.4415% ( 35) 00:07:44.951 8721.329 - 8771.742: 89.6339% ( 33) 00:07:44.951 8771.742 - 8822.154: 89.8146% ( 31) 00:07:44.951 8822.154 - 8872.566: 89.9895% ( 30) 00:07:44.951 8872.566 - 8922.978: 90.1469% ( 27) 00:07:44.951 8922.978 - 8973.391: 90.3160% ( 29) 00:07:44.951 8973.391 - 9023.803: 90.4618% ( 25) 00:07:44.951 9023.803 - 9074.215: 90.5900% ( 22) 00:07:44.951 9074.215 - 9124.628: 90.7241% ( 23) 00:07:44.951 9124.628 - 9175.040: 90.8699% ( 25) 00:07:44.951 9175.040 - 9225.452: 91.0098% ( 24) 00:07:44.951 9225.452 - 9275.865: 91.1381% ( 22) 00:07:44.951 9275.865 - 9326.277: 91.2313% ( 16) 00:07:44.951 9326.277 - 9376.689: 91.3130% ( 14) 00:07:44.951 9376.689 - 9427.102: 91.4062% ( 16) 00:07:44.951 9427.102 - 9477.514: 91.4704% ( 11) 00:07:44.951 9477.514 - 9527.926: 91.5229% ( 9) 00:07:44.951 9527.926 - 9578.338: 91.5812% ( 10) 00:07:44.951 9578.338 - 9628.751: 91.6278% ( 8) 00:07:44.951 9628.751 - 9679.163: 91.6803% ( 9) 00:07:44.951 9679.163 - 9729.575: 91.7677% ( 15) 00:07:44.951 9729.575 - 9779.988: 91.8843% ( 20) 00:07:44.951 9779.988 - 9830.400: 91.9893% ( 18) 00:07:44.951 9830.400 - 9880.812: 92.1059% ( 20) 00:07:44.951 9880.812 - 9931.225: 92.2458% ( 24) 00:07:44.951 9931.225 - 9981.637: 92.3799% ( 23) 00:07:44.951 9981.637 - 10032.049: 92.5257% ( 25) 00:07:44.951 10032.049 - 10082.462: 92.6656% ( 24) 00:07:44.951 10082.462 - 10132.874: 92.8172% ( 26) 00:07:44.951 10132.874 - 10183.286: 93.0037% ( 32) 00:07:44.951 10183.286 - 10233.698: 93.1495% ( 25) 00:07:44.951 10233.698 - 10284.111: 93.3127% ( 28) 00:07:44.951 10284.111 - 10334.523: 93.4993% ( 32) 00:07:44.951 10334.523 - 10384.935: 93.6859% ( 32) 00:07:44.951 10384.935 - 10435.348: 93.8491% ( 28) 00:07:44.951 10435.348 - 10485.760: 94.0240% ( 30) 00:07:44.951 10485.760 - 10536.172: 94.1989% ( 30) 00:07:44.951 10536.172 - 10586.585: 94.3738% ( 30) 00:07:44.951 10586.585 - 10636.997: 94.5254% ( 26) 00:07:44.951 10636.997 - 10687.409: 94.6770% ( 26) 00:07:44.951 10687.409 - 10737.822: 94.8228% ( 25) 00:07:44.951 10737.822 - 10788.234: 94.9394% ( 20) 00:07:44.951 10788.234 - 10838.646: 95.0851% ( 25) 00:07:44.951 10838.646 - 10889.058: 95.2833% ( 34) 00:07:44.951 10889.058 - 10939.471: 95.4991% ( 37) 00:07:44.951 10939.471 - 10989.883: 95.6565% ( 27) 00:07:44.951 10989.883 - 11040.295: 95.8605% ( 35) 00:07:44.951 11040.295 - 11090.708: 96.0413% ( 31) 00:07:44.951 11090.708 - 11141.120: 96.2512% ( 36) 00:07:44.951 11141.120 - 11191.532: 96.4494% ( 34) 00:07:44.951 11191.532 - 11241.945: 96.6418% ( 33) 00:07:44.951 11241.945 - 11292.357: 96.7992% ( 27) 00:07:44.951 11292.357 - 11342.769: 96.9741% ( 30) 00:07:44.951 11342.769 - 11393.182: 97.1199% ( 25) 00:07:44.951 11393.182 - 11443.594: 97.2948% ( 30) 00:07:44.951 11443.594 - 11494.006: 97.4347% ( 24) 00:07:44.951 11494.006 - 11544.418: 97.6038% ( 29) 00:07:44.951 11544.418 - 11594.831: 97.7379% ( 23) 00:07:44.951 11594.831 - 11645.243: 97.8778% ( 24) 00:07:44.951 11645.243 - 11695.655: 
98.0119% ( 23) 00:07:44.951 11695.655 - 11746.068: 98.0993% ( 15) 00:07:44.951 11746.068 - 11796.480: 98.1751% ( 13) 00:07:44.951 11796.480 - 11846.892: 98.2276% ( 9) 00:07:44.951 11846.892 - 11897.305: 98.2976% ( 12) 00:07:44.951 11897.305 - 11947.717: 98.3442% ( 8) 00:07:44.951 11947.717 - 11998.129: 98.3909% ( 8) 00:07:44.951 11998.129 - 12048.542: 98.4375% ( 8) 00:07:44.951 12048.542 - 12098.954: 98.4783% ( 7) 00:07:44.951 12098.954 - 12149.366: 98.5250% ( 8) 00:07:44.951 12149.366 - 12199.778: 98.5716% ( 8) 00:07:44.951 12199.778 - 12250.191: 98.6124% ( 7) 00:07:44.951 12250.191 - 12300.603: 98.6357% ( 4) 00:07:44.951 12300.603 - 12351.015: 98.6649% ( 5) 00:07:44.951 12351.015 - 12401.428: 98.6824% ( 3) 00:07:44.951 12401.428 - 12451.840: 98.6999% ( 3) 00:07:44.951 12451.840 - 12502.252: 98.7115% ( 2) 00:07:44.951 12502.252 - 12552.665: 98.7232% ( 2) 00:07:44.951 12603.077 - 12653.489: 98.7348% ( 2) 00:07:44.951 12653.489 - 12703.902: 98.7465% ( 2) 00:07:44.951 12703.902 - 12754.314: 98.7873% ( 7) 00:07:44.951 12754.314 - 12804.726: 98.8165% ( 5) 00:07:44.951 12804.726 - 12855.138: 98.8456% ( 5) 00:07:44.951 12855.138 - 12905.551: 98.8923% ( 8) 00:07:44.951 12905.551 - 13006.375: 98.9564% ( 11) 00:07:44.951 13006.375 - 13107.200: 99.0147% ( 10) 00:07:44.951 13107.200 - 13208.025: 99.0788% ( 11) 00:07:44.951 13208.025 - 13308.849: 99.1546% ( 13) 00:07:44.951 13308.849 - 13409.674: 99.1954% ( 7) 00:07:44.951 13409.674 - 13510.498: 99.2421% ( 8) 00:07:44.951 13510.498 - 13611.323: 99.2537% ( 2) 00:07:44.951 24197.908 - 24298.732: 99.2654% ( 2) 00:07:44.952 24298.732 - 24399.557: 99.2887% ( 4) 00:07:44.952 24399.557 - 24500.382: 99.3120% ( 4) 00:07:44.952 24500.382 - 24601.206: 99.3354% ( 4) 00:07:44.952 24601.206 - 24702.031: 99.3587% ( 4) 00:07:44.952 24702.031 - 24802.855: 99.3820% ( 4) 00:07:44.952 24802.855 - 24903.680: 99.4053% ( 4) 00:07:44.952 24903.680 - 25004.505: 99.4345% ( 5) 00:07:44.952 25004.505 - 25105.329: 99.4578% ( 4) 00:07:44.952 25105.329 - 25206.154: 99.4811% ( 4) 00:07:44.952 25206.154 - 25306.978: 99.5044% ( 4) 00:07:44.952 25306.978 - 25407.803: 99.5336% ( 5) 00:07:44.952 25407.803 - 25508.628: 99.5511% ( 3) 00:07:44.952 25508.628 - 25609.452: 99.5802% ( 5) 00:07:44.952 25609.452 - 25710.277: 99.6035% ( 4) 00:07:44.952 25710.277 - 25811.102: 99.6269% ( 4) 00:07:44.952 28835.840 - 29037.489: 99.6502% ( 4) 00:07:44.952 29037.489 - 29239.138: 99.6968% ( 8) 00:07:44.952 29239.138 - 29440.788: 99.7376% ( 7) 00:07:44.952 29440.788 - 29642.437: 99.7901% ( 9) 00:07:44.952 29642.437 - 29844.086: 99.8426% ( 9) 00:07:44.952 29844.086 - 30045.735: 99.8892% ( 8) 00:07:44.952 30045.735 - 30247.385: 99.9417% ( 9) 00:07:44.952 30247.385 - 30449.034: 99.9942% ( 9) 00:07:44.952 30449.034 - 30650.683: 100.0000% ( 1) 00:07:44.952 00:07:44.952 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:44.952 ============================================================================== 00:07:44.952 Range in us Cumulative IO count 00:07:44.952 5898.240 - 5923.446: 0.0408% ( 7) 00:07:44.952 5923.446 - 5948.652: 0.0991% ( 10) 00:07:44.952 5948.652 - 5973.858: 0.1458% ( 8) 00:07:44.952 5973.858 - 5999.065: 0.2157% ( 12) 00:07:44.952 5999.065 - 6024.271: 0.3265% ( 19) 00:07:44.952 6024.271 - 6049.477: 0.4373% ( 19) 00:07:44.952 6049.477 - 6074.683: 0.6238% ( 32) 00:07:44.952 6074.683 - 6099.889: 0.8104% ( 32) 00:07:44.952 6099.889 - 6125.095: 1.0494% ( 41) 00:07:44.952 6125.095 - 6150.302: 1.3351% ( 49) 00:07:44.952 6150.302 - 6175.508: 1.7432% ( 70) 00:07:44.952 6175.508 - 
6200.714: 2.1630% ( 72) 00:07:44.952 6200.714 - 6225.920: 2.6411% ( 82) 00:07:44.952 6225.920 - 6251.126: 3.1308% ( 84) 00:07:44.952 6251.126 - 6276.332: 3.7197% ( 101) 00:07:44.952 6276.332 - 6301.538: 4.3493% ( 108) 00:07:44.952 6301.538 - 6326.745: 5.2472% ( 154) 00:07:44.952 6326.745 - 6351.951: 6.1684% ( 158) 00:07:44.952 6351.951 - 6377.157: 7.1129% ( 162) 00:07:44.952 6377.157 - 6402.363: 8.1040% ( 170) 00:07:44.952 6402.363 - 6427.569: 9.2292% ( 193) 00:07:44.952 6427.569 - 6452.775: 10.4886% ( 216) 00:07:44.952 6452.775 - 6503.188: 13.3862% ( 497) 00:07:44.952 6503.188 - 6553.600: 16.6978% ( 568) 00:07:44.952 6553.600 - 6604.012: 20.4116% ( 637) 00:07:44.952 6604.012 - 6654.425: 24.1954% ( 649) 00:07:44.952 6654.425 - 6704.837: 28.1308% ( 675) 00:07:44.952 6704.837 - 6755.249: 32.3111% ( 717) 00:07:44.952 6755.249 - 6805.662: 36.5730% ( 731) 00:07:44.952 6805.662 - 6856.074: 40.9865% ( 757) 00:07:44.952 6856.074 - 6906.486: 45.3242% ( 744) 00:07:44.952 6906.486 - 6956.898: 49.6677% ( 745) 00:07:44.952 6956.898 - 7007.311: 53.9062% ( 727) 00:07:44.952 7007.311 - 7057.723: 57.8708% ( 680) 00:07:44.952 7057.723 - 7108.135: 61.6954% ( 656) 00:07:44.952 7108.135 - 7158.548: 65.1936% ( 600) 00:07:44.952 7158.548 - 7208.960: 68.4118% ( 552) 00:07:44.952 7208.960 - 7259.372: 71.3969% ( 512) 00:07:44.952 7259.372 - 7309.785: 73.8748% ( 425) 00:07:44.952 7309.785 - 7360.197: 75.9095% ( 349) 00:07:44.952 7360.197 - 7410.609: 77.6294% ( 295) 00:07:44.952 7410.609 - 7461.022: 79.0812% ( 249) 00:07:44.952 7461.022 - 7511.434: 80.3230% ( 213) 00:07:44.952 7511.434 - 7561.846: 81.3958% ( 184) 00:07:44.952 7561.846 - 7612.258: 82.2936% ( 154) 00:07:44.952 7612.258 - 7662.671: 83.0340% ( 127) 00:07:44.952 7662.671 - 7713.083: 83.6112% ( 99) 00:07:44.952 7713.083 - 7763.495: 84.1301% ( 89) 00:07:44.952 7763.495 - 7813.908: 84.4916% ( 62) 00:07:44.952 7813.908 - 7864.320: 84.7889% ( 51) 00:07:44.952 7864.320 - 7914.732: 85.0513% ( 45) 00:07:44.952 7914.732 - 7965.145: 85.2729% ( 38) 00:07:44.952 7965.145 - 8015.557: 85.4478% ( 30) 00:07:44.952 8015.557 - 8065.969: 85.6285% ( 31) 00:07:44.952 8065.969 - 8116.382: 85.8092% ( 31) 00:07:44.952 8116.382 - 8166.794: 85.9550% ( 25) 00:07:44.952 8166.794 - 8217.206: 86.1357% ( 31) 00:07:44.952 8217.206 - 8267.618: 86.3048% ( 29) 00:07:44.952 8267.618 - 8318.031: 86.4447% ( 24) 00:07:44.952 8318.031 - 8368.443: 86.6604% ( 37) 00:07:44.952 8368.443 - 8418.855: 86.8354% ( 30) 00:07:44.952 8418.855 - 8469.268: 87.0103% ( 30) 00:07:44.952 8469.268 - 8519.680: 87.1735% ( 28) 00:07:44.952 8519.680 - 8570.092: 87.3542% ( 31) 00:07:44.952 8570.092 - 8620.505: 87.5117% ( 27) 00:07:44.952 8620.505 - 8670.917: 87.7041% ( 33) 00:07:44.952 8670.917 - 8721.329: 87.8906% ( 32) 00:07:44.952 8721.329 - 8771.742: 88.1472% ( 44) 00:07:44.952 8771.742 - 8822.154: 88.3920% ( 42) 00:07:44.952 8822.154 - 8872.566: 88.6311% ( 41) 00:07:44.952 8872.566 - 8922.978: 88.8876% ( 44) 00:07:44.952 8922.978 - 8973.391: 89.1558% ( 46) 00:07:44.952 8973.391 - 9023.803: 89.4473% ( 50) 00:07:44.952 9023.803 - 9074.215: 89.7505% ( 52) 00:07:44.952 9074.215 - 9124.628: 90.0770% ( 56) 00:07:44.952 9124.628 - 9175.040: 90.3568% ( 48) 00:07:44.952 9175.040 - 9225.452: 90.6308% ( 47) 00:07:44.952 9225.452 - 9275.865: 90.9165% ( 49) 00:07:44.952 9275.865 - 9326.277: 91.1614% ( 42) 00:07:44.952 9326.277 - 9376.689: 91.4354% ( 47) 00:07:44.952 9376.689 - 9427.102: 91.6978% ( 45) 00:07:44.952 9427.102 - 9477.514: 91.9601% ( 45) 00:07:44.952 9477.514 - 9527.926: 92.2225% ( 45) 00:07:44.952 9527.926 - 
9578.338: 92.4382% ( 37) 00:07:44.952 9578.338 - 9628.751: 92.6539% ( 37) 00:07:44.952 9628.751 - 9679.163: 92.8638% ( 36) 00:07:44.952 9679.163 - 9729.575: 93.0504% ( 32) 00:07:44.952 9729.575 - 9779.988: 93.2136% ( 28) 00:07:44.952 9779.988 - 9830.400: 93.3419% ( 22) 00:07:44.952 9830.400 - 9880.812: 93.4760% ( 23) 00:07:44.952 9880.812 - 9931.225: 93.5868% ( 19) 00:07:44.952 9931.225 - 9981.637: 93.7208% ( 23) 00:07:44.952 9981.637 - 10032.049: 93.8608% ( 24) 00:07:44.952 10032.049 - 10082.462: 93.9890% ( 22) 00:07:44.952 10082.462 - 10132.874: 94.1173% ( 22) 00:07:44.952 10132.874 - 10183.286: 94.2281% ( 19) 00:07:44.952 10183.286 - 10233.698: 94.3563% ( 22) 00:07:44.952 10233.698 - 10284.111: 94.4788% ( 21) 00:07:44.952 10284.111 - 10334.523: 94.5896% ( 19) 00:07:44.952 10334.523 - 10384.935: 94.6887% ( 17) 00:07:44.952 10384.935 - 10435.348: 94.8053% ( 20) 00:07:44.952 10435.348 - 10485.760: 94.8811% ( 13) 00:07:44.952 10485.760 - 10536.172: 94.9627% ( 14) 00:07:44.952 10536.172 - 10586.585: 95.0560% ( 16) 00:07:44.952 10586.585 - 10636.997: 95.1376% ( 14) 00:07:44.952 10636.997 - 10687.409: 95.2250% ( 15) 00:07:44.952 10687.409 - 10737.822: 95.3125% ( 15) 00:07:44.952 10737.822 - 10788.234: 95.4058% ( 16) 00:07:44.952 10788.234 - 10838.646: 95.4816% ( 13) 00:07:44.952 10838.646 - 10889.058: 95.5865% ( 18) 00:07:44.952 10889.058 - 10939.471: 95.7090% ( 21) 00:07:44.952 10939.471 - 10989.883: 95.7964% ( 15) 00:07:44.952 10989.883 - 11040.295: 95.9247% ( 22) 00:07:44.952 11040.295 - 11090.708: 96.0238% ( 17) 00:07:44.952 11090.708 - 11141.120: 96.1346% ( 19) 00:07:44.952 11141.120 - 11191.532: 96.2861% ( 26) 00:07:44.952 11191.532 - 11241.945: 96.3911% ( 18) 00:07:44.952 11241.945 - 11292.357: 96.5252% ( 23) 00:07:44.952 11292.357 - 11342.769: 96.6593% ( 23) 00:07:44.952 11342.769 - 11393.182: 96.7759% ( 20) 00:07:44.952 11393.182 - 11443.594: 96.9100% ( 23) 00:07:44.952 11443.594 - 11494.006: 97.0324% ( 21) 00:07:44.952 11494.006 - 11544.418: 97.1374% ( 18) 00:07:44.952 11544.418 - 11594.831: 97.2481% ( 19) 00:07:44.952 11594.831 - 11645.243: 97.3706% ( 21) 00:07:44.952 11645.243 - 11695.655: 97.4988% ( 22) 00:07:44.952 11695.655 - 11746.068: 97.5979% ( 17) 00:07:44.952 11746.068 - 11796.480: 97.7087% ( 19) 00:07:44.952 11796.480 - 11846.892: 97.8253% ( 20) 00:07:44.952 11846.892 - 11897.305: 97.9361% ( 19) 00:07:44.952 11897.305 - 11947.717: 98.0236% ( 15) 00:07:44.952 11947.717 - 11998.129: 98.1110% ( 15) 00:07:44.952 11998.129 - 12048.542: 98.1868% ( 13) 00:07:44.952 12048.542 - 12098.954: 98.2801% ( 16) 00:07:44.952 12098.954 - 12149.366: 98.3559% ( 13) 00:07:44.952 12149.366 - 12199.778: 98.4258% ( 12) 00:07:44.952 12199.778 - 12250.191: 98.4725% ( 8) 00:07:44.952 12250.191 - 12300.603: 98.5133% ( 7) 00:07:44.952 12300.603 - 12351.015: 98.5541% ( 7) 00:07:44.952 12351.015 - 12401.428: 98.6007% ( 8) 00:07:44.952 12401.428 - 12451.840: 98.6416% ( 7) 00:07:44.952 12451.840 - 12502.252: 98.6824% ( 7) 00:07:44.952 12502.252 - 12552.665: 98.7290% ( 8) 00:07:44.952 12552.665 - 12603.077: 98.7757% ( 8) 00:07:44.952 12603.077 - 12653.489: 98.8223% ( 8) 00:07:44.952 12653.489 - 12703.902: 98.8631% ( 7) 00:07:44.952 12703.902 - 12754.314: 98.9097% ( 8) 00:07:44.952 12754.314 - 12804.726: 98.9564% ( 8) 00:07:44.952 12804.726 - 12855.138: 98.9855% ( 5) 00:07:44.952 12855.138 - 12905.551: 99.0089% ( 4) 00:07:44.952 12905.551 - 13006.375: 99.0555% ( 8) 00:07:44.952 13006.375 - 13107.200: 99.1021% ( 8) 00:07:44.952 13107.200 - 13208.025: 99.1430% ( 7) 00:07:44.952 13208.025 - 13308.849: 99.1896% 
( 8) 00:07:44.952 13308.849 - 13409.674: 99.2304% ( 7) 00:07:44.952 13409.674 - 13510.498: 99.2537% ( 4) 00:07:44.952 22988.012 - 23088.837: 99.2712% ( 3) 00:07:44.952 23088.837 - 23189.662: 99.2945% ( 4) 00:07:44.952 23189.662 - 23290.486: 99.3179% ( 4) 00:07:44.952 23290.486 - 23391.311: 99.3412% ( 4) 00:07:44.952 23391.311 - 23492.135: 99.3645% ( 4) 00:07:44.952 23492.135 - 23592.960: 99.3937% ( 5) 00:07:44.952 23592.960 - 23693.785: 99.4170% ( 4) 00:07:44.952 23693.785 - 23794.609: 99.4403% ( 4) 00:07:44.952 23794.609 - 23895.434: 99.4636% ( 4) 00:07:44.952 23895.434 - 23996.258: 99.4811% ( 3) 00:07:44.952 23996.258 - 24097.083: 99.5103% ( 5) 00:07:44.952 24097.083 - 24197.908: 99.5336% ( 4) 00:07:44.953 24197.908 - 24298.732: 99.5569% ( 4) 00:07:44.953 24298.732 - 24399.557: 99.5802% ( 4) 00:07:44.953 24399.557 - 24500.382: 99.6035% ( 4) 00:07:44.953 24500.382 - 24601.206: 99.6269% ( 4) 00:07:44.953 27625.945 - 27827.594: 99.6618% ( 6) 00:07:44.953 27827.594 - 28029.243: 99.7085% ( 8) 00:07:44.953 28029.243 - 28230.892: 99.7551% ( 8) 00:07:44.953 28230.892 - 28432.542: 99.8018% ( 8) 00:07:44.953 28432.542 - 28634.191: 99.8542% ( 9) 00:07:44.953 28634.191 - 28835.840: 99.9009% ( 8) 00:07:44.953 28835.840 - 29037.489: 99.9475% ( 8) 00:07:44.953 29037.489 - 29239.138: 100.0000% ( 9) 00:07:44.953 00:07:44.953 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:44.953 ============================================================================== 00:07:44.953 Range in us Cumulative IO count 00:07:44.953 5847.828 - 5873.034: 0.0058% ( 1) 00:07:44.953 5873.034 - 5898.240: 0.0292% ( 4) 00:07:44.953 5898.240 - 5923.446: 0.0466% ( 3) 00:07:44.953 5923.446 - 5948.652: 0.0525% ( 1) 00:07:44.953 5948.652 - 5973.858: 0.0816% ( 5) 00:07:44.953 5973.858 - 5999.065: 0.1399% ( 10) 00:07:44.953 5999.065 - 6024.271: 0.1807% ( 7) 00:07:44.953 6024.271 - 6049.477: 0.2565% ( 13) 00:07:44.953 6049.477 - 6074.683: 0.3790% ( 21) 00:07:44.953 6074.683 - 6099.889: 0.5189% ( 24) 00:07:44.953 6099.889 - 6125.095: 0.6705% ( 26) 00:07:44.953 6125.095 - 6150.302: 0.9678% ( 51) 00:07:44.953 6150.302 - 6175.508: 1.3176% ( 60) 00:07:44.953 6175.508 - 6200.714: 1.7607% ( 76) 00:07:44.953 6200.714 - 6225.920: 2.3088% ( 94) 00:07:44.953 6225.920 - 6251.126: 2.7402% ( 74) 00:07:44.953 6251.126 - 6276.332: 3.2474% ( 87) 00:07:44.953 6276.332 - 6301.538: 3.8479% ( 103) 00:07:44.953 6301.538 - 6326.745: 4.6117% ( 131) 00:07:44.953 6326.745 - 6351.951: 5.3930% ( 134) 00:07:44.953 6351.951 - 6377.157: 6.4191% ( 176) 00:07:44.953 6377.157 - 6402.363: 7.5152% ( 188) 00:07:44.953 6402.363 - 6427.569: 8.5413% ( 176) 00:07:44.953 6427.569 - 6452.775: 9.8006% ( 216) 00:07:44.953 6452.775 - 6503.188: 12.6691% ( 492) 00:07:44.953 6503.188 - 6553.600: 16.0972% ( 588) 00:07:44.953 6553.600 - 6604.012: 19.5837% ( 598) 00:07:44.953 6604.012 - 6654.425: 23.4783% ( 668) 00:07:44.953 6654.425 - 6704.837: 27.4079% ( 674) 00:07:44.953 6704.837 - 6755.249: 31.6989% ( 736) 00:07:44.953 6755.249 - 6805.662: 36.0949% ( 754) 00:07:44.953 6805.662 - 6856.074: 40.5142% ( 758) 00:07:44.953 6856.074 - 6906.486: 44.9569% ( 762) 00:07:44.953 6906.486 - 6956.898: 49.3237% ( 749) 00:07:44.953 6956.898 - 7007.311: 53.6614% ( 744) 00:07:44.953 7007.311 - 7057.723: 57.7717% ( 705) 00:07:44.953 7057.723 - 7108.135: 61.7421% ( 681) 00:07:44.953 7108.135 - 7158.548: 65.4792% ( 641) 00:07:44.953 7158.548 - 7208.960: 68.8258% ( 574) 00:07:44.953 7208.960 - 7259.372: 71.7292% ( 498) 00:07:44.953 7259.372 - 7309.785: 74.2887% ( 439) 00:07:44.953 
7309.785 - 7360.197: 76.3584% ( 355) 00:07:44.953 7360.197 - 7410.609: 78.0958% ( 298) 00:07:44.953 7410.609 - 7461.022: 79.5767% ( 254) 00:07:44.953 7461.022 - 7511.434: 80.8361% ( 216) 00:07:44.953 7511.434 - 7561.846: 81.8447% ( 173) 00:07:44.953 7561.846 - 7612.258: 82.6901% ( 145) 00:07:44.953 7612.258 - 7662.671: 83.3839% ( 119) 00:07:44.953 7662.671 - 7713.083: 83.9552% ( 98) 00:07:44.953 7713.083 - 7763.495: 84.4275% ( 81) 00:07:44.953 7763.495 - 7813.908: 84.7540% ( 56) 00:07:44.953 7813.908 - 7864.320: 84.9580% ( 35) 00:07:44.953 7864.320 - 7914.732: 85.1388% ( 31) 00:07:44.953 7914.732 - 7965.145: 85.3078% ( 29) 00:07:44.953 7965.145 - 8015.557: 85.4711% ( 28) 00:07:44.953 8015.557 - 8065.969: 85.6168% ( 25) 00:07:44.953 8065.969 - 8116.382: 85.7276% ( 19) 00:07:44.953 8116.382 - 8166.794: 85.8500% ( 21) 00:07:44.953 8166.794 - 8217.206: 85.9841% ( 23) 00:07:44.953 8217.206 - 8267.618: 86.1124% ( 22) 00:07:44.953 8267.618 - 8318.031: 86.2290% ( 20) 00:07:44.953 8318.031 - 8368.443: 86.3689% ( 24) 00:07:44.953 8368.443 - 8418.855: 86.5089% ( 24) 00:07:44.953 8418.855 - 8469.268: 86.6430% ( 23) 00:07:44.953 8469.268 - 8519.680: 86.8004% ( 27) 00:07:44.953 8519.680 - 8570.092: 86.9753% ( 30) 00:07:44.953 8570.092 - 8620.505: 87.1677% ( 33) 00:07:44.953 8620.505 - 8670.917: 87.4242% ( 44) 00:07:44.953 8670.917 - 8721.329: 87.6749% ( 43) 00:07:44.953 8721.329 - 8771.742: 87.9314% ( 44) 00:07:44.953 8771.742 - 8822.154: 88.2463% ( 54) 00:07:44.953 8822.154 - 8872.566: 88.5436% ( 51) 00:07:44.953 8872.566 - 8922.978: 88.8643% ( 55) 00:07:44.953 8922.978 - 8973.391: 89.2316% ( 63) 00:07:44.953 8973.391 - 9023.803: 89.5406% ( 53) 00:07:44.953 9023.803 - 9074.215: 89.8962% ( 61) 00:07:44.953 9074.215 - 9124.628: 90.2227% ( 56) 00:07:44.953 9124.628 - 9175.040: 90.5492% ( 56) 00:07:44.953 9175.040 - 9225.452: 90.8291% ( 48) 00:07:44.953 9225.452 - 9275.865: 91.0914% ( 45) 00:07:44.953 9275.865 - 9326.277: 91.3888% ( 51) 00:07:44.953 9326.277 - 9376.689: 91.6803% ( 50) 00:07:44.953 9376.689 - 9427.102: 91.9485% ( 46) 00:07:44.953 9427.102 - 9477.514: 92.2050% ( 44) 00:07:44.953 9477.514 - 9527.926: 92.4090% ( 35) 00:07:44.953 9527.926 - 9578.338: 92.5898% ( 31) 00:07:44.953 9578.338 - 9628.751: 92.7880% ( 34) 00:07:44.953 9628.751 - 9679.163: 93.0562% ( 46) 00:07:44.953 9679.163 - 9729.575: 93.2544% ( 34) 00:07:44.953 9729.575 - 9779.988: 93.4585% ( 35) 00:07:44.953 9779.988 - 9830.400: 93.6276% ( 29) 00:07:44.953 9830.400 - 9880.812: 93.7850% ( 27) 00:07:44.953 9880.812 - 9931.225: 93.9016% ( 20) 00:07:44.953 9931.225 - 9981.637: 94.0124% ( 19) 00:07:44.953 9981.637 - 10032.049: 94.1348% ( 21) 00:07:44.953 10032.049 - 10082.462: 94.2339% ( 17) 00:07:44.953 10082.462 - 10132.874: 94.3447% ( 19) 00:07:44.953 10132.874 - 10183.286: 94.4846% ( 24) 00:07:44.953 10183.286 - 10233.698: 94.5837% ( 17) 00:07:44.953 10233.698 - 10284.111: 94.6770% ( 16) 00:07:44.953 10284.111 - 10334.523: 94.7528% ( 13) 00:07:44.953 10334.523 - 10384.935: 94.8461% ( 16) 00:07:44.953 10384.935 - 10435.348: 94.9452% ( 17) 00:07:44.953 10435.348 - 10485.760: 95.0210% ( 13) 00:07:44.953 10485.760 - 10536.172: 95.0735% ( 9) 00:07:44.953 10536.172 - 10586.585: 95.1376% ( 11) 00:07:44.953 10586.585 - 10636.997: 95.1959% ( 10) 00:07:44.953 10636.997 - 10687.409: 95.2542% ( 10) 00:07:44.953 10687.409 - 10737.822: 95.3183% ( 11) 00:07:44.953 10737.822 - 10788.234: 95.3883% ( 12) 00:07:44.953 10788.234 - 10838.646: 95.4524% ( 11) 00:07:44.953 10838.646 - 10889.058: 95.5107% ( 10) 00:07:44.953 10889.058 - 10939.471: 95.5749% ( 
11) 00:07:44.953 10939.471 - 10989.883: 95.7323% ( 27) 00:07:44.953 10989.883 - 11040.295: 95.8197% ( 15) 00:07:44.953 11040.295 - 11090.708: 95.9188% ( 17) 00:07:44.953 11090.708 - 11141.120: 96.0121% ( 16) 00:07:44.953 11141.120 - 11191.532: 96.1171% ( 18) 00:07:44.953 11191.532 - 11241.945: 96.2162% ( 17) 00:07:44.953 11241.945 - 11292.357: 96.3036% ( 15) 00:07:44.953 11292.357 - 11342.769: 96.4436% ( 24) 00:07:44.953 11342.769 - 11393.182: 96.5777% ( 23) 00:07:44.953 11393.182 - 11443.594: 96.6943% ( 20) 00:07:44.953 11443.594 - 11494.006: 96.8225% ( 22) 00:07:44.953 11494.006 - 11544.418: 96.9508% ( 22) 00:07:44.953 11544.418 - 11594.831: 97.0907% ( 24) 00:07:44.953 11594.831 - 11645.243: 97.2306% ( 24) 00:07:44.953 11645.243 - 11695.655: 97.3822% ( 26) 00:07:44.953 11695.655 - 11746.068: 97.5338% ( 26) 00:07:44.953 11746.068 - 11796.480: 97.6912% ( 27) 00:07:44.953 11796.480 - 11846.892: 97.8137% ( 21) 00:07:44.953 11846.892 - 11897.305: 97.8895% ( 13) 00:07:44.953 11897.305 - 11947.717: 97.9769% ( 15) 00:07:44.953 11947.717 - 11998.129: 98.0644% ( 15) 00:07:44.953 11998.129 - 12048.542: 98.1402% ( 13) 00:07:44.953 12048.542 - 12098.954: 98.2334% ( 16) 00:07:44.953 12098.954 - 12149.366: 98.3151% ( 14) 00:07:44.953 12149.366 - 12199.778: 98.3967% ( 14) 00:07:44.953 12199.778 - 12250.191: 98.4783% ( 14) 00:07:44.953 12250.191 - 12300.603: 98.5424% ( 11) 00:07:44.953 12300.603 - 12351.015: 98.6124% ( 12) 00:07:44.953 12351.015 - 12401.428: 98.6765% ( 11) 00:07:44.953 12401.428 - 12451.840: 98.7465% ( 12) 00:07:44.953 12451.840 - 12502.252: 98.8165% ( 12) 00:07:44.953 12502.252 - 12552.665: 98.8806% ( 11) 00:07:44.953 12552.665 - 12603.077: 98.9214% ( 7) 00:07:44.953 12603.077 - 12653.489: 98.9564% ( 6) 00:07:44.953 12653.489 - 12703.902: 98.9914% ( 6) 00:07:44.953 12703.902 - 12754.314: 99.0205% ( 5) 00:07:44.953 12754.314 - 12804.726: 99.0555% ( 6) 00:07:44.953 12804.726 - 12855.138: 99.0905% ( 6) 00:07:44.953 12855.138 - 12905.551: 99.1255% ( 6) 00:07:44.953 12905.551 - 13006.375: 99.1896% ( 11) 00:07:44.953 13006.375 - 13107.200: 99.2304% ( 7) 00:07:44.953 13107.200 - 13208.025: 99.2479% ( 3) 00:07:44.953 13208.025 - 13308.849: 99.2537% ( 1) 00:07:44.953 21273.994 - 21374.818: 99.2771% ( 4) 00:07:44.953 21374.818 - 21475.643: 99.3004% ( 4) 00:07:44.953 21475.643 - 21576.468: 99.3237% ( 4) 00:07:44.953 21576.468 - 21677.292: 99.3528% ( 5) 00:07:44.953 21677.292 - 21778.117: 99.3762% ( 4) 00:07:44.953 21778.117 - 21878.942: 99.3995% ( 4) 00:07:44.953 21878.942 - 21979.766: 99.4228% ( 4) 00:07:44.953 21979.766 - 22080.591: 99.4520% ( 5) 00:07:44.953 22080.591 - 22181.415: 99.4753% ( 4) 00:07:44.953 22181.415 - 22282.240: 99.4986% ( 4) 00:07:44.953 22282.240 - 22383.065: 99.5219% ( 4) 00:07:44.953 22383.065 - 22483.889: 99.5452% ( 4) 00:07:44.953 22483.889 - 22584.714: 99.5686% ( 4) 00:07:44.953 22584.714 - 22685.538: 99.5919% ( 4) 00:07:44.953 22685.538 - 22786.363: 99.6152% ( 4) 00:07:44.953 22786.363 - 22887.188: 99.6269% ( 2) 00:07:44.953 26012.751 - 26214.400: 99.6677% ( 7) 00:07:44.953 26214.400 - 26416.049: 99.7201% ( 9) 00:07:44.953 26416.049 - 26617.698: 99.7668% ( 8) 00:07:44.953 26617.698 - 26819.348: 99.8134% ( 8) 00:07:44.953 26819.348 - 27020.997: 99.8659% ( 9) 00:07:44.953 27020.997 - 27222.646: 99.9125% ( 8) 00:07:44.953 27222.646 - 27424.295: 99.9592% ( 8) 00:07:44.953 27424.295 - 27625.945: 100.0000% ( 7) 00:07:44.953 00:07:44.954 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:44.954 
============================================================================== 00:07:44.954 Range in us Cumulative IO count 00:07:44.954 5847.828 - 5873.034: 0.0058% ( 1) 00:07:44.954 5873.034 - 5898.240: 0.0175% ( 2) 00:07:44.954 5898.240 - 5923.446: 0.0233% ( 1) 00:07:44.954 5923.446 - 5948.652: 0.0466% ( 4) 00:07:44.954 5948.652 - 5973.858: 0.1049% ( 10) 00:07:44.954 5973.858 - 5999.065: 0.1399% ( 6) 00:07:44.954 5999.065 - 6024.271: 0.1749% ( 6) 00:07:44.954 6024.271 - 6049.477: 0.2624% ( 15) 00:07:44.954 6049.477 - 6074.683: 0.4023% ( 24) 00:07:44.954 6074.683 - 6099.889: 0.5889% ( 32) 00:07:44.954 6099.889 - 6125.095: 0.8221% ( 40) 00:07:44.954 6125.095 - 6150.302: 1.0611% ( 41) 00:07:44.954 6150.302 - 6175.508: 1.3584% ( 51) 00:07:44.954 6175.508 - 6200.714: 1.7957% ( 75) 00:07:44.954 6200.714 - 6225.920: 2.3088% ( 88) 00:07:44.954 6225.920 - 6251.126: 2.7460% ( 75) 00:07:44.954 6251.126 - 6276.332: 3.2183% ( 81) 00:07:44.954 6276.332 - 6301.538: 3.8713% ( 112) 00:07:44.954 6301.538 - 6326.745: 4.6117% ( 127) 00:07:44.954 6326.745 - 6351.951: 5.5154% ( 155) 00:07:44.954 6351.951 - 6377.157: 6.4191% ( 155) 00:07:44.954 6377.157 - 6402.363: 7.3927% ( 167) 00:07:44.954 6402.363 - 6427.569: 8.5646% ( 201) 00:07:44.954 6427.569 - 6452.775: 9.9464% ( 237) 00:07:44.954 6452.775 - 6503.188: 12.7449% ( 480) 00:07:44.954 6503.188 - 6553.600: 16.0331% ( 564) 00:07:44.954 6553.600 - 6604.012: 19.6712% ( 624) 00:07:44.954 6604.012 - 6654.425: 23.4725% ( 652) 00:07:44.954 6654.425 - 6704.837: 27.5245% ( 695) 00:07:44.954 6704.837 - 6755.249: 31.6756% ( 712) 00:07:44.954 6755.249 - 6805.662: 36.0833% ( 756) 00:07:44.954 6805.662 - 6856.074: 40.6367% ( 781) 00:07:44.954 6856.074 - 6906.486: 45.0676% ( 760) 00:07:44.954 6906.486 - 6956.898: 49.3237% ( 730) 00:07:44.954 6956.898 - 7007.311: 53.5098% ( 718) 00:07:44.954 7007.311 - 7057.723: 57.5326% ( 690) 00:07:44.954 7057.723 - 7108.135: 61.5322% ( 686) 00:07:44.954 7108.135 - 7158.548: 65.2694% ( 641) 00:07:44.954 7158.548 - 7208.960: 68.6625% ( 582) 00:07:44.954 7208.960 - 7259.372: 71.4960% ( 486) 00:07:44.954 7259.372 - 7309.785: 73.9097% ( 414) 00:07:44.954 7309.785 - 7360.197: 75.9970% ( 358) 00:07:44.954 7360.197 - 7410.609: 77.7985% ( 309) 00:07:44.954 7410.609 - 7461.022: 79.3027% ( 258) 00:07:44.954 7461.022 - 7511.434: 80.5096% ( 207) 00:07:44.954 7511.434 - 7561.846: 81.5299% ( 175) 00:07:44.954 7561.846 - 7612.258: 82.3986% ( 149) 00:07:44.954 7612.258 - 7662.671: 83.0807% ( 117) 00:07:44.954 7662.671 - 7713.083: 83.6462% ( 97) 00:07:44.954 7713.083 - 7763.495: 84.1593% ( 88) 00:07:44.954 7763.495 - 7813.908: 84.5441% ( 66) 00:07:44.954 7813.908 - 7864.320: 84.8706% ( 56) 00:07:44.954 7864.320 - 7914.732: 85.1329% ( 45) 00:07:44.954 7914.732 - 7965.145: 85.4011% ( 46) 00:07:44.954 7965.145 - 8015.557: 85.6402% ( 41) 00:07:44.954 8015.557 - 8065.969: 85.8967% ( 44) 00:07:44.954 8065.969 - 8116.382: 86.1066% ( 36) 00:07:44.954 8116.382 - 8166.794: 86.3165% ( 36) 00:07:44.954 8166.794 - 8217.206: 86.4855% ( 29) 00:07:44.954 8217.206 - 8267.618: 86.6430% ( 27) 00:07:44.954 8267.618 - 8318.031: 86.8004% ( 27) 00:07:44.954 8318.031 - 8368.443: 86.9578% ( 27) 00:07:44.954 8368.443 - 8418.855: 87.1152% ( 27) 00:07:44.954 8418.855 - 8469.268: 87.2726% ( 27) 00:07:44.954 8469.268 - 8519.680: 87.4475% ( 30) 00:07:44.954 8519.680 - 8570.092: 87.6632% ( 37) 00:07:44.954 8570.092 - 8620.505: 87.8731% ( 36) 00:07:44.954 8620.505 - 8670.917: 88.1180% ( 42) 00:07:44.954 8670.917 - 8721.329: 88.3570% ( 41) 00:07:44.954 8721.329 - 8771.742: 88.5844% ( 39) 
00:07:44.954 8771.742 - 8822.154: 88.8060% ( 38) 00:07:44.954 8822.154 - 8872.566: 89.0217% ( 37) 00:07:44.954 8872.566 - 8922.978: 89.2199% ( 34) 00:07:44.954 8922.978 - 8973.391: 89.4356% ( 37) 00:07:44.954 8973.391 - 9023.803: 89.6863% ( 43) 00:07:44.954 9023.803 - 9074.215: 89.9137% ( 39) 00:07:44.954 9074.215 - 9124.628: 90.1236% ( 36) 00:07:44.954 9124.628 - 9175.040: 90.3568% ( 40) 00:07:44.954 9175.040 - 9225.452: 90.5900% ( 40) 00:07:44.954 9225.452 - 9275.865: 90.8232% ( 40) 00:07:44.954 9275.865 - 9326.277: 91.0564% ( 40) 00:07:44.954 9326.277 - 9376.689: 91.2955% ( 41) 00:07:44.954 9376.689 - 9427.102: 91.4879% ( 33) 00:07:44.954 9427.102 - 9477.514: 91.6919% ( 35) 00:07:44.954 9477.514 - 9527.926: 91.8610% ( 29) 00:07:44.954 9527.926 - 9578.338: 92.0476% ( 32) 00:07:44.954 9578.338 - 9628.751: 92.2458% ( 34) 00:07:44.954 9628.751 - 9679.163: 92.4207% ( 30) 00:07:44.954 9679.163 - 9729.575: 92.5898% ( 29) 00:07:44.954 9729.575 - 9779.988: 92.7647% ( 30) 00:07:44.954 9779.988 - 9830.400: 92.9279% ( 28) 00:07:44.954 9830.400 - 9880.812: 93.0562% ( 22) 00:07:44.954 9880.812 - 9931.225: 93.1845% ( 22) 00:07:44.954 9931.225 - 9981.637: 93.3477% ( 28) 00:07:44.954 9981.637 - 10032.049: 93.4760% ( 22) 00:07:44.954 10032.049 - 10082.462: 93.6625% ( 32) 00:07:44.954 10082.462 - 10132.874: 93.8433% ( 31) 00:07:44.954 10132.874 - 10183.286: 94.0240% ( 31) 00:07:44.954 10183.286 - 10233.698: 94.2048% ( 31) 00:07:44.954 10233.698 - 10284.111: 94.3680% ( 28) 00:07:44.954 10284.111 - 10334.523: 94.5546% ( 32) 00:07:44.954 10334.523 - 10384.935: 94.6945% ( 24) 00:07:44.954 10384.935 - 10435.348: 94.8228% ( 22) 00:07:44.954 10435.348 - 10485.760: 94.9452% ( 21) 00:07:44.954 10485.760 - 10536.172: 95.0560% ( 19) 00:07:44.954 10536.172 - 10586.585: 95.1667% ( 19) 00:07:44.954 10586.585 - 10636.997: 95.2717% ( 18) 00:07:44.954 10636.997 - 10687.409: 95.3591% ( 15) 00:07:44.954 10687.409 - 10737.822: 95.4524% ( 16) 00:07:44.954 10737.822 - 10788.234: 95.5340% ( 14) 00:07:44.954 10788.234 - 10838.646: 95.6098% ( 13) 00:07:44.954 10838.646 - 10889.058: 95.6856% ( 13) 00:07:44.954 10889.058 - 10939.471: 95.7381% ( 9) 00:07:44.954 10939.471 - 10989.883: 95.8022% ( 11) 00:07:44.954 10989.883 - 11040.295: 95.8955% ( 16) 00:07:44.954 11040.295 - 11090.708: 95.9771% ( 14) 00:07:44.954 11090.708 - 11141.120: 96.0704% ( 16) 00:07:44.954 11141.120 - 11191.532: 96.1521% ( 14) 00:07:44.954 11191.532 - 11241.945: 96.2453% ( 16) 00:07:44.954 11241.945 - 11292.357: 96.3270% ( 14) 00:07:44.954 11292.357 - 11342.769: 96.4494% ( 21) 00:07:44.954 11342.769 - 11393.182: 96.5951% ( 25) 00:07:44.954 11393.182 - 11443.594: 96.7467% ( 26) 00:07:44.954 11443.594 - 11494.006: 96.9100% ( 28) 00:07:44.954 11494.006 - 11544.418: 97.0499% ( 24) 00:07:44.954 11544.418 - 11594.831: 97.1957% ( 25) 00:07:44.954 11594.831 - 11645.243: 97.3472% ( 26) 00:07:44.954 11645.243 - 11695.655: 97.4988% ( 26) 00:07:44.954 11695.655 - 11746.068: 97.6679% ( 29) 00:07:44.954 11746.068 - 11796.480: 97.8195% ( 26) 00:07:44.954 11796.480 - 11846.892: 97.9128% ( 16) 00:07:44.954 11846.892 - 11897.305: 98.0236% ( 19) 00:07:44.954 11897.305 - 11947.717: 98.1227% ( 17) 00:07:44.954 11947.717 - 11998.129: 98.2160% ( 16) 00:07:44.954 11998.129 - 12048.542: 98.3151% ( 17) 00:07:44.954 12048.542 - 12098.954: 98.4025% ( 15) 00:07:44.954 12098.954 - 12149.366: 98.4725% ( 12) 00:07:44.954 12149.366 - 12199.778: 98.5366% ( 11) 00:07:44.954 12199.778 - 12250.191: 98.6066% ( 12) 00:07:44.954 12250.191 - 12300.603: 98.6765% ( 12) 00:07:44.954 12300.603 - 
12351.015: 98.7174% ( 7) 00:07:44.954 12351.015 - 12401.428: 98.7582% ( 7) 00:07:44.954 12401.428 - 12451.840: 98.8106% ( 9) 00:07:44.954 12451.840 - 12502.252: 98.8514% ( 7) 00:07:44.954 12502.252 - 12552.665: 98.8806% ( 5) 00:07:44.954 12552.665 - 12603.077: 98.9156% ( 6) 00:07:44.954 12603.077 - 12653.489: 98.9447% ( 5) 00:07:44.954 12653.489 - 12703.902: 98.9739% ( 5) 00:07:44.954 12703.902 - 12754.314: 99.0030% ( 5) 00:07:44.954 12754.314 - 12804.726: 99.0380% ( 6) 00:07:44.954 12804.726 - 12855.138: 99.0730% ( 6) 00:07:44.954 12855.138 - 12905.551: 99.1080% ( 6) 00:07:44.954 12905.551 - 13006.375: 99.1838% ( 13) 00:07:44.954 13006.375 - 13107.200: 99.2188% ( 6) 00:07:44.954 13107.200 - 13208.025: 99.2362% ( 3) 00:07:44.954 13208.025 - 13308.849: 99.2537% ( 3) 00:07:44.954 19459.151 - 19559.975: 99.2596% ( 1) 00:07:44.954 19559.975 - 19660.800: 99.2829% ( 4) 00:07:44.954 19660.800 - 19761.625: 99.3062% ( 4) 00:07:44.954 19761.625 - 19862.449: 99.3237% ( 3) 00:07:44.954 19862.449 - 19963.274: 99.3470% ( 4) 00:07:44.954 19963.274 - 20064.098: 99.3762% ( 5) 00:07:44.954 20064.098 - 20164.923: 99.3995% ( 4) 00:07:44.954 20164.923 - 20265.748: 99.4228% ( 4) 00:07:44.954 20265.748 - 20366.572: 99.4461% ( 4) 00:07:44.954 20366.572 - 20467.397: 99.4694% ( 4) 00:07:44.954 20467.397 - 20568.222: 99.4928% ( 4) 00:07:44.955 20568.222 - 20669.046: 99.5219% ( 5) 00:07:44.955 20669.046 - 20769.871: 99.5452% ( 4) 00:07:44.955 20769.871 - 20870.695: 99.5686% ( 4) 00:07:44.955 20870.695 - 20971.520: 99.5919% ( 4) 00:07:44.955 20971.520 - 21072.345: 99.6152% ( 4) 00:07:44.955 21072.345 - 21173.169: 99.6269% ( 2) 00:07:44.955 24298.732 - 24399.557: 99.6502% ( 4) 00:07:44.955 24399.557 - 24500.382: 99.6677% ( 3) 00:07:44.955 24500.382 - 24601.206: 99.6968% ( 5) 00:07:44.955 24601.206 - 24702.031: 99.7201% ( 4) 00:07:44.955 24702.031 - 24802.855: 99.7435% ( 4) 00:07:44.955 24802.855 - 24903.680: 99.7668% ( 4) 00:07:44.955 24903.680 - 25004.505: 99.7785% ( 2) 00:07:44.955 25004.505 - 25105.329: 99.8018% ( 4) 00:07:44.955 25105.329 - 25206.154: 99.8251% ( 4) 00:07:44.955 25206.154 - 25306.978: 99.8542% ( 5) 00:07:44.955 25306.978 - 25407.803: 99.8776% ( 4) 00:07:44.955 25407.803 - 25508.628: 99.9009% ( 4) 00:07:44.955 25508.628 - 25609.452: 99.9242% ( 4) 00:07:44.955 25609.452 - 25710.277: 99.9475% ( 4) 00:07:44.955 25710.277 - 25811.102: 99.9708% ( 4) 00:07:44.955 25811.102 - 26012.751: 100.0000% ( 5) 00:07:44.955 00:07:44.955 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:44.955 ============================================================================== 00:07:44.955 Range in us Cumulative IO count 00:07:44.955 5772.209 - 5797.415: 0.0116% ( 2) 00:07:44.955 5797.415 - 5822.622: 0.0174% ( 1) 00:07:44.955 5822.622 - 5847.828: 0.0290% ( 2) 00:07:44.955 5847.828 - 5873.034: 0.0407% ( 2) 00:07:44.955 5873.034 - 5898.240: 0.0465% ( 1) 00:07:44.955 5898.240 - 5923.446: 0.0639% ( 3) 00:07:44.955 5923.446 - 5948.652: 0.0813% ( 3) 00:07:44.955 5948.652 - 5973.858: 0.1452% ( 11) 00:07:44.955 5973.858 - 5999.065: 0.2265% ( 14) 00:07:44.955 5999.065 - 6024.271: 0.2730% ( 8) 00:07:44.955 6024.271 - 6049.477: 0.3369% ( 11) 00:07:44.955 6049.477 - 6074.683: 0.4298% ( 16) 00:07:44.955 6074.683 - 6099.889: 0.5692% ( 24) 00:07:44.955 6099.889 - 6125.095: 0.7493% ( 31) 00:07:44.955 6125.095 - 6150.302: 0.9991% ( 43) 00:07:44.955 6150.302 - 6175.508: 1.2837% ( 49) 00:07:44.955 6175.508 - 6200.714: 1.6032% ( 55) 00:07:44.955 6200.714 - 6225.920: 1.9575% ( 61) 00:07:44.955 6225.920 - 6251.126: 2.4164% ( 
79) 00:07:44.955 6251.126 - 6276.332: 3.0379% ( 107) 00:07:44.955 6276.332 - 6301.538: 3.8220% ( 135) 00:07:44.955 6301.538 - 6326.745: 4.5713% ( 129) 00:07:44.955 6326.745 - 6351.951: 5.3903% ( 141) 00:07:44.955 6351.951 - 6377.157: 6.3139% ( 159) 00:07:44.955 6377.157 - 6402.363: 7.2897% ( 168) 00:07:44.955 6402.363 - 6427.569: 8.4689% ( 203) 00:07:44.955 6427.569 - 6452.775: 9.6887% ( 210) 00:07:44.955 6452.775 - 6503.188: 12.5290% ( 489) 00:07:44.955 6503.188 - 6553.600: 15.6831% ( 543) 00:07:44.955 6553.600 - 6604.012: 19.1856% ( 603) 00:07:44.955 6604.012 - 6654.425: 23.0948% ( 673) 00:07:44.955 6654.425 - 6704.837: 27.1550% ( 699) 00:07:44.955 6704.837 - 6755.249: 31.3081% ( 715) 00:07:44.955 6755.249 - 6805.662: 35.6413% ( 746) 00:07:44.955 6805.662 - 6856.074: 40.0790% ( 764) 00:07:44.955 6856.074 - 6906.486: 44.5051% ( 762) 00:07:44.955 6906.486 - 6956.898: 48.9661% ( 768) 00:07:44.955 6956.898 - 7007.311: 53.2528% ( 738) 00:07:44.955 7007.311 - 7057.723: 57.4291% ( 719) 00:07:44.955 7057.723 - 7108.135: 61.3731% ( 679) 00:07:44.955 7108.135 - 7158.548: 64.9512% ( 616) 00:07:44.955 7158.548 - 7208.960: 68.2272% ( 564) 00:07:44.955 7208.960 - 7259.372: 71.0502% ( 486) 00:07:44.955 7259.372 - 7309.785: 73.5479% ( 430) 00:07:44.955 7309.785 - 7360.197: 75.6970% ( 370) 00:07:44.955 7360.197 - 7410.609: 77.5674% ( 322) 00:07:44.955 7410.609 - 7461.022: 79.0602% ( 257) 00:07:44.955 7461.022 - 7511.434: 80.3206% ( 217) 00:07:44.955 7511.434 - 7561.846: 81.4475% ( 194) 00:07:44.955 7561.846 - 7612.258: 82.3769% ( 160) 00:07:44.955 7612.258 - 7662.671: 83.1784% ( 138) 00:07:44.955 7662.671 - 7713.083: 83.8290% ( 112) 00:07:44.955 7713.083 - 7763.495: 84.3460% ( 89) 00:07:44.955 7763.495 - 7813.908: 84.7351% ( 67) 00:07:44.955 7813.908 - 7864.320: 85.0895% ( 61) 00:07:44.955 7864.320 - 7914.732: 85.4322% ( 59) 00:07:44.955 7914.732 - 7965.145: 85.7691% ( 58) 00:07:44.955 7965.145 - 8015.557: 86.0769% ( 53) 00:07:44.955 8015.557 - 8065.969: 86.3731% ( 51) 00:07:44.955 8065.969 - 8116.382: 86.6578% ( 49) 00:07:44.955 8116.382 - 8166.794: 86.9366% ( 48) 00:07:44.955 8166.794 - 8217.206: 87.1921% ( 44) 00:07:44.955 8217.206 - 8267.618: 87.4013% ( 36) 00:07:44.955 8267.618 - 8318.031: 87.6278% ( 39) 00:07:44.955 8318.031 - 8368.443: 87.8253% ( 34) 00:07:44.955 8368.443 - 8418.855: 88.0112% ( 32) 00:07:44.955 8418.855 - 8469.268: 88.1970% ( 32) 00:07:44.955 8469.268 - 8519.680: 88.3597% ( 28) 00:07:44.955 8519.680 - 8570.092: 88.5281% ( 29) 00:07:44.955 8570.092 - 8620.505: 88.6849% ( 27) 00:07:44.955 8620.505 - 8670.917: 88.8766% ( 33) 00:07:44.955 8670.917 - 8721.329: 89.0567% ( 31) 00:07:44.955 8721.329 - 8771.742: 89.2135% ( 27) 00:07:44.955 8771.742 - 8822.154: 89.3587% ( 25) 00:07:44.955 8822.154 - 8872.566: 89.5156% ( 27) 00:07:44.955 8872.566 - 8922.978: 89.6666% ( 26) 00:07:44.955 8922.978 - 8973.391: 89.8467% ( 31) 00:07:44.955 8973.391 - 9023.803: 90.0151% ( 29) 00:07:44.955 9023.803 - 9074.215: 90.1777% ( 28) 00:07:44.955 9074.215 - 9124.628: 90.3810% ( 35) 00:07:44.955 9124.628 - 9175.040: 90.5669% ( 32) 00:07:44.955 9175.040 - 9225.452: 90.7470% ( 31) 00:07:44.955 9225.452 - 9275.865: 90.9154% ( 29) 00:07:44.955 9275.865 - 9326.277: 91.0955% ( 31) 00:07:44.955 9326.277 - 9376.689: 91.2639% ( 29) 00:07:44.955 9376.689 - 9427.102: 91.4266% ( 28) 00:07:44.955 9427.102 - 9477.514: 91.5718% ( 25) 00:07:44.955 9477.514 - 9527.926: 91.6822% ( 19) 00:07:44.955 9527.926 - 9578.338: 91.7809% ( 17) 00:07:44.955 9578.338 - 9628.751: 91.8796% ( 17) 00:07:44.955 9628.751 - 9679.163: 91.9784% ( 
17) 00:07:44.955 9679.163 - 9729.575: 92.0829% ( 18) 00:07:44.955 9729.575 - 9779.988: 92.1933% ( 19) 00:07:44.955 9779.988 - 9830.400: 92.3037% ( 19) 00:07:44.955 9830.400 - 9880.812: 92.3734% ( 12) 00:07:44.955 9880.812 - 9931.225: 92.4489% ( 13) 00:07:44.955 9931.225 - 9981.637: 92.5592% ( 19) 00:07:44.955 9981.637 - 10032.049: 92.6812% ( 21) 00:07:44.955 10032.049 - 10082.462: 92.8206% ( 24) 00:07:44.955 10082.462 - 10132.874: 92.9426% ( 21) 00:07:44.955 10132.874 - 10183.286: 93.0704% ( 22) 00:07:44.955 10183.286 - 10233.698: 93.2505% ( 31) 00:07:44.955 10233.698 - 10284.111: 93.3899% ( 24) 00:07:44.955 10284.111 - 10334.523: 93.5409% ( 26) 00:07:44.955 10334.523 - 10384.935: 93.6919% ( 26) 00:07:44.955 10384.935 - 10435.348: 93.8720% ( 31) 00:07:44.955 10435.348 - 10485.760: 94.0695% ( 34) 00:07:44.955 10485.760 - 10536.172: 94.2670% ( 34) 00:07:44.955 10536.172 - 10586.585: 94.4703% ( 35) 00:07:44.955 10586.585 - 10636.997: 94.6561% ( 32) 00:07:44.955 10636.997 - 10687.409: 94.8304% ( 30) 00:07:44.955 10687.409 - 10737.822: 95.0105% ( 31) 00:07:44.955 10737.822 - 10788.234: 95.2021% ( 33) 00:07:44.955 10788.234 - 10838.646: 95.3590% ( 27) 00:07:44.955 10838.646 - 10889.058: 95.5100% ( 26) 00:07:44.955 10889.058 - 10939.471: 95.6610% ( 26) 00:07:44.955 10939.471 - 10989.883: 95.8469% ( 32) 00:07:44.955 10989.883 - 11040.295: 96.0095% ( 28) 00:07:44.955 11040.295 - 11090.708: 96.1896% ( 31) 00:07:44.955 11090.708 - 11141.120: 96.3813% ( 33) 00:07:44.955 11141.120 - 11191.532: 96.5671% ( 32) 00:07:44.955 11191.532 - 11241.945: 96.7588% ( 33) 00:07:44.955 11241.945 - 11292.357: 96.9215% ( 28) 00:07:44.955 11292.357 - 11342.769: 97.0783% ( 27) 00:07:44.955 11342.769 - 11393.182: 97.2351% ( 27) 00:07:44.955 11393.182 - 11443.594: 97.3745% ( 24) 00:07:44.955 11443.594 - 11494.006: 97.5372% ( 28) 00:07:44.955 11494.006 - 11544.418: 97.6533% ( 20) 00:07:44.955 11544.418 - 11594.831: 97.7753% ( 21) 00:07:44.955 11594.831 - 11645.243: 97.8799% ( 18) 00:07:44.955 11645.243 - 11695.655: 97.9844% ( 18) 00:07:44.955 11695.655 - 11746.068: 98.0658% ( 14) 00:07:44.955 11746.068 - 11796.480: 98.1703% ( 18) 00:07:44.955 11796.480 - 11846.892: 98.2400% ( 12) 00:07:44.955 11846.892 - 11897.305: 98.2923% ( 9) 00:07:44.955 11897.305 - 11947.717: 98.3388% ( 8) 00:07:44.955 11947.717 - 11998.129: 98.3620% ( 4) 00:07:44.955 11998.129 - 12048.542: 98.3852% ( 4) 00:07:44.955 12048.542 - 12098.954: 98.4026% ( 3) 00:07:44.955 12098.954 - 12149.366: 98.4201% ( 3) 00:07:44.955 12149.366 - 12199.778: 98.4433% ( 4) 00:07:44.955 12199.778 - 12250.191: 98.4607% ( 3) 00:07:44.955 12250.191 - 12300.603: 98.4840% ( 4) 00:07:44.955 12300.603 - 12351.015: 98.5362% ( 9) 00:07:44.955 12351.015 - 12401.428: 98.5595% ( 4) 00:07:44.955 12401.428 - 12451.840: 98.5943% ( 6) 00:07:44.955 12451.840 - 12502.252: 98.6234% ( 5) 00:07:44.955 12502.252 - 12552.665: 98.6466% ( 4) 00:07:44.955 12552.665 - 12603.077: 98.6815% ( 6) 00:07:44.955 12603.077 - 12653.489: 98.7105% ( 5) 00:07:44.955 12653.489 - 12703.902: 98.7570% ( 8) 00:07:44.955 12703.902 - 12754.314: 98.8034% ( 8) 00:07:44.955 12754.314 - 12804.726: 98.8383% ( 6) 00:07:44.955 12804.726 - 12855.138: 98.8615% ( 4) 00:07:44.955 12855.138 - 12905.551: 98.8964% ( 6) 00:07:44.955 12905.551 - 13006.375: 98.9661% ( 12) 00:07:44.955 13006.375 - 13107.200: 99.0416% ( 13) 00:07:44.955 13107.200 - 13208.025: 99.0939% ( 9) 00:07:44.955 13208.025 - 13308.849: 99.1345% ( 7) 00:07:44.955 13308.849 - 13409.674: 99.1810% ( 8) 00:07:44.955 13409.674 - 13510.498: 99.2275% ( 8) 00:07:44.955 
13510.498 - 13611.323: 99.2565% ( 5) 00:07:44.955 13712.148 - 13812.972: 99.2623% ( 1) 00:07:44.955 13812.972 - 13913.797: 99.2855% ( 4) 00:07:44.955 13913.797 - 14014.622: 99.3088% ( 4) 00:07:44.955 14014.622 - 14115.446: 99.3320% ( 4) 00:07:44.955 14115.446 - 14216.271: 99.3553% ( 4) 00:07:44.955 14216.271 - 14317.095: 99.3843% ( 5) 00:07:44.955 14317.095 - 14417.920: 99.4075% ( 4) 00:07:44.955 14417.920 - 14518.745: 99.4308% ( 4) 00:07:44.955 14518.745 - 14619.569: 99.4540% ( 4) 00:07:44.955 14619.569 - 14720.394: 99.4772% ( 4) 00:07:44.955 14720.394 - 14821.218: 99.5005% ( 4) 00:07:44.956 14821.218 - 14922.043: 99.5237% ( 4) 00:07:44.956 14922.043 - 15022.868: 99.5527% ( 5) 00:07:44.956 15022.868 - 15123.692: 99.5760% ( 4) 00:07:44.956 15123.692 - 15224.517: 99.5992% ( 4) 00:07:44.956 15224.517 - 15325.342: 99.6283% ( 5) 00:07:44.956 18854.203 - 18955.028: 99.6457% ( 3) 00:07:44.956 18955.028 - 19055.852: 99.6689% ( 4) 00:07:44.956 19055.852 - 19156.677: 99.6921% ( 4) 00:07:44.956 19156.677 - 19257.502: 99.7154% ( 4) 00:07:44.956 19257.502 - 19358.326: 99.7386% ( 4) 00:07:44.956 19358.326 - 19459.151: 99.7618% ( 4) 00:07:44.956 19459.151 - 19559.975: 99.7909% ( 5) 00:07:44.956 19559.975 - 19660.800: 99.8141% ( 4) 00:07:44.956 19660.800 - 19761.625: 99.8374% ( 4) 00:07:44.956 19761.625 - 19862.449: 99.8606% ( 4) 00:07:44.956 19862.449 - 19963.274: 99.8838% ( 4) 00:07:44.956 19963.274 - 20064.098: 99.9129% ( 5) 00:07:44.956 20064.098 - 20164.923: 99.9361% ( 4) 00:07:44.956 20164.923 - 20265.748: 99.9593% ( 4) 00:07:44.956 20265.748 - 20366.572: 99.9826% ( 4) 00:07:44.956 20366.572 - 20467.397: 100.0000% ( 3) 00:07:44.956 00:07:44.956 11:22:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:45.903 Initializing NVMe Controllers 00:07:45.903 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.903 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.903 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.903 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.903 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:45.903 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:45.903 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:45.903 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:45.903 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:45.903 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:45.903 Initialization complete. Launching workers. 
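For anyone replaying this workload outside the CI job, a minimal sketch for re-running the same spdk_nvme_perf invocation and pulling back only the per-device summary table it prints (shown below in this log). The perf.log file name and the awk range anchors are assumptions based on the output format here, not part of the job itself; the binary path and flags are copied verbatim from the command logged above (-q queue depth, -w workload, -o I/O size in bytes, -t run time in seconds).

    # Same invocation as logged above; capture the tool's stdout to a file.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 | tee perf.log
    # Keep only the summary table: the header row, one row per attached namespace,
    # and the Total row (columns: IOPS, MiB/s, average/min/max latency in microseconds).
    awk '/Device Information/,/Total.*:/' perf.log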
00:07:45.903 ======================================================== 00:07:45.903 Latency(us) 00:07:45.903 Device Information : IOPS MiB/s Average min max 00:07:45.903 PCIE (0000:00:10.0) NSID 1 from core 0: 16035.63 187.92 7993.76 5680.35 31564.12 00:07:45.903 PCIE (0000:00:11.0) NSID 1 from core 0: 16035.63 187.92 7981.21 5817.36 29630.95 00:07:45.903 PCIE (0000:00:13.0) NSID 1 from core 0: 16035.63 187.92 7968.57 5616.75 28102.13 00:07:45.903 PCIE (0000:00:12.0) NSID 1 from core 0: 16035.63 187.92 7956.07 5695.51 26232.26 00:07:45.903 PCIE (0000:00:12.0) NSID 2 from core 0: 16035.63 187.92 7943.44 5833.99 24355.41 00:07:45.903 PCIE (0000:00:12.0) NSID 3 from core 0: 16099.52 188.67 7899.21 5748.77 19079.74 00:07:45.903 ======================================================== 00:07:45.903 Total : 96277.68 1128.25 7957.01 5616.75 31564.12 00:07:45.903 00:07:45.903 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:45.903 ================================================================================= 00:07:45.903 1.00000% : 6326.745us 00:07:45.903 10.00000% : 6906.486us 00:07:45.903 25.00000% : 7108.135us 00:07:45.903 50.00000% : 7511.434us 00:07:45.903 75.00000% : 8166.794us 00:07:45.903 90.00000% : 9477.514us 00:07:45.903 95.00000% : 10636.997us 00:07:45.903 98.00000% : 12149.366us 00:07:45.903 99.00000% : 13712.148us 00:07:45.903 99.50000% : 25710.277us 00:07:45.903 99.90000% : 31255.631us 00:07:45.903 99.99000% : 31658.929us 00:07:45.903 99.99900% : 31658.929us 00:07:45.903 99.99990% : 31658.929us 00:07:45.903 99.99999% : 31658.929us 00:07:45.903 00:07:45.903 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:45.903 ================================================================================= 00:07:45.903 1.00000% : 6402.363us 00:07:45.903 10.00000% : 7007.311us 00:07:45.903 25.00000% : 7208.960us 00:07:45.903 50.00000% : 7461.022us 00:07:45.903 75.00000% : 8166.794us 00:07:45.903 90.00000% : 9427.102us 00:07:45.903 95.00000% : 10586.585us 00:07:45.903 98.00000% : 12048.542us 00:07:45.903 99.00000% : 13006.375us 00:07:45.904 99.50000% : 23895.434us 00:07:45.904 99.90000% : 29239.138us 00:07:45.904 99.99000% : 29642.437us 00:07:45.904 99.99900% : 29642.437us 00:07:45.904 99.99990% : 29642.437us 00:07:45.904 99.99999% : 29642.437us 00:07:45.904 00:07:45.904 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:45.904 ================================================================================= 00:07:45.904 1.00000% : 6326.745us 00:07:45.904 10.00000% : 6956.898us 00:07:45.904 25.00000% : 7158.548us 00:07:45.904 50.00000% : 7461.022us 00:07:45.904 75.00000% : 8166.794us 00:07:45.904 90.00000% : 9376.689us 00:07:45.904 95.00000% : 10687.409us 00:07:45.904 98.00000% : 12351.015us 00:07:45.904 99.00000% : 13208.025us 00:07:45.904 99.50000% : 23088.837us 00:07:45.904 99.90000% : 27827.594us 00:07:45.904 99.99000% : 28230.892us 00:07:45.904 99.99900% : 28230.892us 00:07:45.904 99.99990% : 28230.892us 00:07:45.904 99.99999% : 28230.892us 00:07:45.904 00:07:45.904 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:45.904 ================================================================================= 00:07:45.904 1.00000% : 6402.363us 00:07:45.904 10.00000% : 7007.311us 00:07:45.904 25.00000% : 7208.960us 00:07:45.904 50.00000% : 7461.022us 00:07:45.904 75.00000% : 8166.794us 00:07:45.904 90.00000% : 9427.102us 00:07:45.904 95.00000% : 10636.997us 00:07:45.904 98.00000% : 12451.840us 00:07:45.904 99.00000% : 
13812.972us 00:07:45.904 99.50000% : 21173.169us 00:07:45.904 99.90000% : 26012.751us 00:07:45.904 99.99000% : 26214.400us 00:07:45.904 99.99900% : 26416.049us 00:07:45.904 99.99990% : 26416.049us 00:07:45.904 99.99999% : 26416.049us 00:07:45.904 00:07:45.904 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:45.904 ================================================================================= 00:07:45.904 1.00000% : 6351.951us 00:07:45.904 10.00000% : 7007.311us 00:07:45.904 25.00000% : 7158.548us 00:07:45.904 50.00000% : 7461.022us 00:07:45.904 75.00000% : 8166.794us 00:07:45.904 90.00000% : 9427.102us 00:07:45.904 95.00000% : 10636.997us 00:07:45.904 98.00000% : 12250.191us 00:07:45.904 99.00000% : 14518.745us 00:07:45.904 99.50000% : 19358.326us 00:07:45.904 99.90000% : 23996.258us 00:07:45.904 99.99000% : 24399.557us 00:07:45.904 99.99900% : 24399.557us 00:07:45.904 99.99990% : 24399.557us 00:07:45.904 99.99999% : 24399.557us 00:07:45.904 00:07:45.904 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:45.904 ================================================================================= 00:07:45.904 1.00000% : 6351.951us 00:07:45.904 10.00000% : 7007.311us 00:07:45.904 25.00000% : 7208.960us 00:07:45.904 50.00000% : 7461.022us 00:07:45.904 75.00000% : 8166.794us 00:07:45.904 90.00000% : 9477.514us 00:07:45.904 95.00000% : 10586.585us 00:07:45.904 98.00000% : 11998.129us 00:07:45.904 99.00000% : 13208.025us 00:07:45.904 99.50000% : 14821.218us 00:07:45.904 99.90000% : 18652.554us 00:07:45.904 99.99000% : 19055.852us 00:07:45.904 99.99900% : 19156.677us 00:07:45.904 99.99990% : 19156.677us 00:07:45.904 99.99999% : 19156.677us 00:07:45.904 00:07:45.904 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:45.904 ============================================================================== 00:07:45.904 Range in us Cumulative IO count 00:07:45.904 5671.385 - 5696.591: 0.0125% ( 2) 00:07:45.904 5696.591 - 5721.797: 0.0187% ( 1) 00:07:45.904 5721.797 - 5747.003: 0.0311% ( 2) 00:07:45.904 5747.003 - 5772.209: 0.0374% ( 1) 00:07:45.904 5772.209 - 5797.415: 0.0436% ( 1) 00:07:45.904 5797.415 - 5822.622: 0.0498% ( 1) 00:07:45.904 5822.622 - 5847.828: 0.0685% ( 3) 00:07:45.904 5847.828 - 5873.034: 0.0934% ( 4) 00:07:45.904 5873.034 - 5898.240: 0.1121% ( 3) 00:07:45.904 5898.240 - 5923.446: 0.1245% ( 2) 00:07:45.904 5923.446 - 5948.652: 0.1494% ( 4) 00:07:45.904 5948.652 - 5973.858: 0.1681% ( 3) 00:07:45.904 5973.858 - 5999.065: 0.2054% ( 6) 00:07:45.904 5999.065 - 6024.271: 0.2490% ( 7) 00:07:45.904 6024.271 - 6049.477: 0.2988% ( 8) 00:07:45.904 6049.477 - 6074.683: 0.3984% ( 16) 00:07:45.904 6074.683 - 6099.889: 0.4793% ( 13) 00:07:45.904 6099.889 - 6125.095: 0.6350% ( 25) 00:07:45.904 6125.095 - 6150.302: 0.7034% ( 11) 00:07:45.904 6150.302 - 6175.508: 0.7532% ( 8) 00:07:45.904 6175.508 - 6200.714: 0.7844% ( 5) 00:07:45.904 6200.714 - 6225.920: 0.8404% ( 9) 00:07:45.904 6225.920 - 6251.126: 0.8840% ( 7) 00:07:45.904 6251.126 - 6276.332: 0.9338% ( 8) 00:07:45.904 6276.332 - 6301.538: 0.9773% ( 7) 00:07:45.904 6301.538 - 6326.745: 1.0085% ( 5) 00:07:45.904 6326.745 - 6351.951: 1.0396% ( 5) 00:07:45.904 6351.951 - 6377.157: 1.0707% ( 5) 00:07:45.904 6377.157 - 6402.363: 1.1641% ( 15) 00:07:45.904 6402.363 - 6427.569: 1.2575% ( 15) 00:07:45.904 6427.569 - 6452.775: 1.3508% ( 15) 00:07:45.904 6452.775 - 6503.188: 1.5874% ( 38) 00:07:45.904 6503.188 - 6553.600: 1.8613% ( 44) 00:07:45.904 6553.600 - 6604.012: 2.1788% ( 51) 00:07:45.904 
6604.012 - 6654.425: 2.8698% ( 111) 00:07:45.904 6654.425 - 6704.837: 3.7849% ( 147) 00:07:45.904 6704.837 - 6755.249: 5.3411% ( 250) 00:07:45.904 6755.249 - 6805.662: 6.9161% ( 253) 00:07:45.904 6805.662 - 6856.074: 9.5680% ( 426) 00:07:45.904 6856.074 - 6906.486: 12.0207% ( 394) 00:07:45.904 6906.486 - 6956.898: 15.1332% ( 500) 00:07:45.904 6956.898 - 7007.311: 19.0613% ( 631) 00:07:45.904 7007.311 - 7057.723: 22.4664% ( 547) 00:07:45.904 7057.723 - 7108.135: 25.9898% ( 566) 00:07:45.904 7108.135 - 7158.548: 29.4821% ( 561) 00:07:45.904 7158.548 - 7208.960: 32.8374% ( 539) 00:07:45.904 7208.960 - 7259.372: 35.9188% ( 495) 00:07:45.904 7259.372 - 7309.785: 39.1621% ( 521) 00:07:45.904 7309.785 - 7360.197: 42.5984% ( 552) 00:07:45.904 7360.197 - 7410.609: 45.5989% ( 482) 00:07:45.904 7410.609 - 7461.022: 48.8110% ( 516) 00:07:45.904 7461.022 - 7511.434: 51.8551% ( 489) 00:07:45.904 7511.434 - 7561.846: 54.6190% ( 444) 00:07:45.904 7561.846 - 7612.258: 57.3518% ( 439) 00:07:45.904 7612.258 - 7662.671: 59.9913% ( 424) 00:07:45.904 7662.671 - 7713.083: 62.4191% ( 390) 00:07:45.904 7713.083 - 7763.495: 64.5356% ( 340) 00:07:45.904 7763.495 - 7813.908: 66.5276% ( 320) 00:07:45.904 7813.908 - 7864.320: 68.3205% ( 288) 00:07:45.904 7864.320 - 7914.732: 69.9141% ( 256) 00:07:45.904 7914.732 - 7965.145: 71.1404% ( 197) 00:07:45.904 7965.145 - 8015.557: 72.3543% ( 195) 00:07:45.904 8015.557 - 8065.969: 73.3005% ( 152) 00:07:45.904 8065.969 - 8116.382: 74.3962% ( 176) 00:07:45.904 8116.382 - 8166.794: 75.2801% ( 142) 00:07:45.904 8166.794 - 8217.206: 76.1267% ( 136) 00:07:45.904 8217.206 - 8267.618: 76.9671% ( 135) 00:07:45.904 8267.618 - 8318.031: 77.6706% ( 113) 00:07:45.904 8318.031 - 8368.443: 78.3429% ( 108) 00:07:45.904 8368.443 - 8418.855: 78.9965% ( 105) 00:07:45.904 8418.855 - 8469.268: 79.5941% ( 96) 00:07:45.904 8469.268 - 8519.680: 80.2415% ( 104) 00:07:45.904 8519.680 - 8570.092: 80.9201% ( 109) 00:07:45.904 8570.092 - 8620.505: 81.5675% ( 104) 00:07:45.904 8620.505 - 8670.917: 82.3394% ( 124) 00:07:45.904 8670.917 - 8721.329: 82.8872% ( 88) 00:07:45.904 8721.329 - 8771.742: 83.5657% ( 109) 00:07:45.904 8771.742 - 8822.154: 84.1633% ( 96) 00:07:45.904 8822.154 - 8872.566: 84.7049% ( 87) 00:07:45.904 8872.566 - 8922.978: 85.2714% ( 91) 00:07:45.904 8922.978 - 8973.391: 85.8877% ( 99) 00:07:45.904 8973.391 - 9023.803: 86.4044% ( 83) 00:07:45.904 9023.803 - 9074.215: 86.8588% ( 73) 00:07:45.904 9074.215 - 9124.628: 87.4440% ( 94) 00:07:45.904 9124.628 - 9175.040: 87.8299% ( 62) 00:07:45.904 9175.040 - 9225.452: 88.2532% ( 68) 00:07:45.904 9225.452 - 9275.865: 88.6952% ( 71) 00:07:45.904 9275.865 - 9326.277: 89.0500% ( 57) 00:07:45.904 9326.277 - 9376.689: 89.4734% ( 68) 00:07:45.904 9376.689 - 9427.102: 89.9589% ( 78) 00:07:45.904 9427.102 - 9477.514: 90.2639% ( 49) 00:07:45.904 9477.514 - 9527.926: 90.6250% ( 58) 00:07:45.904 9527.926 - 9578.338: 90.9176% ( 47) 00:07:45.904 9578.338 - 9628.751: 91.2039% ( 46) 00:07:45.904 9628.751 - 9679.163: 91.4716% ( 43) 00:07:45.904 9679.163 - 9729.575: 91.7144% ( 39) 00:07:45.904 9729.575 - 9779.988: 91.9696% ( 41) 00:07:45.904 9779.988 - 9830.400: 92.2747% ( 49) 00:07:45.904 9830.400 - 9880.812: 92.4988% ( 36) 00:07:45.904 9880.812 - 9931.225: 92.6855% ( 30) 00:07:45.904 9931.225 - 9981.637: 92.8847% ( 32) 00:07:45.904 9981.637 - 10032.049: 93.0528% ( 27) 00:07:45.904 10032.049 - 10082.462: 93.2084% ( 25) 00:07:45.904 10082.462 - 10132.874: 93.3578% ( 24) 00:07:45.904 10132.874 - 10183.286: 93.5134% ( 25) 00:07:45.904 10183.286 - 10233.698: 93.7189% 
( 33) 00:07:45.904 10233.698 - 10284.111: 93.9679% ( 40) 00:07:45.904 10284.111 - 10334.523: 94.1484% ( 29) 00:07:45.904 10334.523 - 10384.935: 94.3414% ( 31) 00:07:45.904 10384.935 - 10435.348: 94.5281% ( 30) 00:07:45.904 10435.348 - 10485.760: 94.6838% ( 25) 00:07:45.904 10485.760 - 10536.172: 94.8020% ( 19) 00:07:45.904 10536.172 - 10586.585: 94.9452% ( 23) 00:07:45.904 10586.585 - 10636.997: 95.0884% ( 23) 00:07:45.904 10636.997 - 10687.409: 95.2565% ( 27) 00:07:45.904 10687.409 - 10737.822: 95.3748% ( 19) 00:07:45.904 10737.822 - 10788.234: 95.4868% ( 18) 00:07:45.904 10788.234 - 10838.646: 95.6113% ( 20) 00:07:45.904 10838.646 - 10889.058: 95.6985% ( 14) 00:07:45.904 10889.058 - 10939.471: 95.7794% ( 13) 00:07:45.904 10939.471 - 10989.883: 95.8914% ( 18) 00:07:45.904 10989.883 - 11040.295: 96.0035% ( 18) 00:07:45.904 11040.295 - 11090.708: 96.1218% ( 19) 00:07:45.904 11090.708 - 11141.120: 96.2463% ( 20) 00:07:45.904 11141.120 - 11191.532: 96.3085% ( 10) 00:07:45.904 11191.532 - 11241.945: 96.3770% ( 11) 00:07:45.904 11241.945 - 11292.357: 96.4392% ( 10) 00:07:45.904 11292.357 - 11342.769: 96.5202% ( 13) 00:07:45.904 11342.769 - 11393.182: 96.6198% ( 16) 00:07:45.904 11393.182 - 11443.594: 96.7629% ( 23) 00:07:45.904 11443.594 - 11494.006: 96.8501% ( 14) 00:07:45.905 11494.006 - 11544.418: 96.9622% ( 18) 00:07:45.905 11544.418 - 11594.831: 97.0742% ( 18) 00:07:45.905 11594.831 - 11645.243: 97.1987% ( 20) 00:07:45.905 11645.243 - 11695.655: 97.3045% ( 17) 00:07:45.905 11695.655 - 11746.068: 97.4104% ( 17) 00:07:45.905 11746.068 - 11796.480: 97.4851% ( 12) 00:07:45.905 11796.480 - 11846.892: 97.5660% ( 13) 00:07:45.905 11846.892 - 11897.305: 97.6407% ( 12) 00:07:45.905 11897.305 - 11947.717: 97.7216% ( 13) 00:07:45.905 11947.717 - 11998.129: 97.7839% ( 10) 00:07:45.905 11998.129 - 12048.542: 97.8772% ( 15) 00:07:45.905 12048.542 - 12098.954: 97.9831% ( 17) 00:07:45.905 12098.954 - 12149.366: 98.1013% ( 19) 00:07:45.905 12149.366 - 12199.778: 98.2258% ( 20) 00:07:45.905 12199.778 - 12250.191: 98.3441% ( 19) 00:07:45.905 12250.191 - 12300.603: 98.4188% ( 12) 00:07:45.905 12300.603 - 12351.015: 98.4562% ( 6) 00:07:45.905 12351.015 - 12401.428: 98.5060% ( 8) 00:07:45.905 12401.428 - 12451.840: 98.5433% ( 6) 00:07:45.905 12451.840 - 12502.252: 98.5682% ( 4) 00:07:45.905 12502.252 - 12552.665: 98.5869% ( 3) 00:07:45.905 12552.665 - 12603.077: 98.6118% ( 4) 00:07:45.905 12603.077 - 12653.489: 98.6243% ( 2) 00:07:45.905 12653.489 - 12703.902: 98.6492% ( 4) 00:07:45.905 12703.902 - 12754.314: 98.6741% ( 4) 00:07:45.905 12754.314 - 12804.726: 98.6927% ( 3) 00:07:45.905 12804.726 - 12855.138: 98.7052% ( 2) 00:07:45.905 12905.551 - 13006.375: 98.7550% ( 8) 00:07:45.905 13006.375 - 13107.200: 98.8172% ( 10) 00:07:45.905 13107.200 - 13208.025: 98.8733% ( 9) 00:07:45.905 13208.025 - 13308.849: 98.9044% ( 5) 00:07:45.905 13308.849 - 13409.674: 98.9355% ( 5) 00:07:45.905 13409.674 - 13510.498: 98.9791% ( 7) 00:07:45.905 13510.498 - 13611.323: 98.9915% ( 2) 00:07:45.905 13611.323 - 13712.148: 99.0102% ( 3) 00:07:45.905 13712.148 - 13812.972: 99.0289% ( 3) 00:07:45.905 13812.972 - 13913.797: 99.0351% ( 1) 00:07:45.905 13913.797 - 14014.622: 99.0538% ( 3) 00:07:45.905 14014.622 - 14115.446: 99.0725% ( 3) 00:07:45.905 14115.446 - 14216.271: 99.0911% ( 3) 00:07:45.905 14216.271 - 14317.095: 99.1098% ( 3) 00:07:45.905 14317.095 - 14417.920: 99.1347% ( 4) 00:07:45.905 14417.920 - 14518.745: 99.1472% ( 2) 00:07:45.905 14518.745 - 14619.569: 99.1658% ( 3) 00:07:45.905 14619.569 - 14720.394: 99.1783% ( 2) 
00:07:45.905 14720.394 - 14821.218: 99.2032% ( 4) 00:07:45.905 24601.206 - 24702.031: 99.2156% ( 2) 00:07:45.905 24702.031 - 24802.855: 99.2468% ( 5) 00:07:45.905 24802.855 - 24903.680: 99.2654% ( 3) 00:07:45.905 24903.680 - 25004.505: 99.3152% ( 8) 00:07:45.905 25004.505 - 25105.329: 99.3526% ( 6) 00:07:45.905 25105.329 - 25206.154: 99.4148% ( 10) 00:07:45.905 25206.154 - 25306.978: 99.4335% ( 3) 00:07:45.905 25306.978 - 25407.803: 99.4397% ( 1) 00:07:45.905 25407.803 - 25508.628: 99.4646% ( 4) 00:07:45.905 25508.628 - 25609.452: 99.4895% ( 4) 00:07:45.905 25609.452 - 25710.277: 99.5207% ( 5) 00:07:45.905 25811.102 - 26012.751: 99.5518% ( 5) 00:07:45.905 26012.751 - 26214.400: 99.6016% ( 8) 00:07:45.905 29642.437 - 29844.086: 99.6203% ( 3) 00:07:45.905 29844.086 - 30045.735: 99.6638% ( 7) 00:07:45.905 30045.735 - 30247.385: 99.7074% ( 7) 00:07:45.905 30247.385 - 30449.034: 99.7510% ( 7) 00:07:45.905 30449.034 - 30650.683: 99.7946% ( 7) 00:07:45.905 30650.683 - 30852.332: 99.8381% ( 7) 00:07:45.905 30852.332 - 31053.982: 99.8879% ( 8) 00:07:45.905 31053.982 - 31255.631: 99.9315% ( 7) 00:07:45.905 31255.631 - 31457.280: 99.9813% ( 8) 00:07:45.905 31457.280 - 31658.929: 100.0000% ( 3) 00:07:45.905 00:07:45.905 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:45.905 ============================================================================== 00:07:45.905 Range in us Cumulative IO count 00:07:45.905 5797.415 - 5822.622: 0.0062% ( 1) 00:07:45.905 5822.622 - 5847.828: 0.0187% ( 2) 00:07:45.905 5847.828 - 5873.034: 0.0374% ( 3) 00:07:45.905 5873.034 - 5898.240: 0.0436% ( 1) 00:07:45.905 5898.240 - 5923.446: 0.0685% ( 4) 00:07:45.905 5923.446 - 5948.652: 0.0996% ( 5) 00:07:45.905 5948.652 - 5973.858: 0.1245% ( 4) 00:07:45.905 5973.858 - 5999.065: 0.1619% ( 6) 00:07:45.905 5999.065 - 6024.271: 0.2988% ( 22) 00:07:45.905 6024.271 - 6049.477: 0.4109% ( 18) 00:07:45.905 6049.477 - 6074.683: 0.6287% ( 35) 00:07:45.905 6074.683 - 6099.889: 0.6661% ( 6) 00:07:45.905 6099.889 - 6125.095: 0.7034% ( 6) 00:07:45.905 6125.095 - 6150.302: 0.7283% ( 4) 00:07:45.905 6150.302 - 6175.508: 0.7532% ( 4) 00:07:45.905 6175.508 - 6200.714: 0.7719% ( 3) 00:07:45.905 6200.714 - 6225.920: 0.7906% ( 3) 00:07:45.905 6225.920 - 6251.126: 0.8030% ( 2) 00:07:45.905 6251.126 - 6276.332: 0.8093% ( 1) 00:07:45.905 6276.332 - 6301.538: 0.8217% ( 2) 00:07:45.905 6301.538 - 6326.745: 0.8528% ( 5) 00:07:45.905 6326.745 - 6351.951: 0.9026% ( 8) 00:07:45.905 6351.951 - 6377.157: 0.9400% ( 6) 00:07:45.905 6377.157 - 6402.363: 1.0022% ( 10) 00:07:45.905 6402.363 - 6427.569: 1.1454% ( 23) 00:07:45.905 6427.569 - 6452.775: 1.2388% ( 15) 00:07:45.905 6452.775 - 6503.188: 1.5314% ( 47) 00:07:45.905 6503.188 - 6553.600: 1.6808% ( 24) 00:07:45.905 6553.600 - 6604.012: 1.8115% ( 21) 00:07:45.905 6604.012 - 6654.425: 2.0294% ( 35) 00:07:45.905 6654.425 - 6704.837: 2.3718% ( 55) 00:07:45.905 6704.837 - 6755.249: 2.9196% ( 88) 00:07:45.905 6755.249 - 6805.662: 4.0463% ( 181) 00:07:45.905 6805.662 - 6856.074: 5.1295% ( 174) 00:07:45.905 6856.074 - 6906.486: 6.9534% ( 293) 00:07:45.905 6906.486 - 6956.898: 9.2754% ( 373) 00:07:45.905 6956.898 - 7007.311: 12.2323% ( 475) 00:07:45.905 7007.311 - 7057.723: 15.9612% ( 599) 00:07:45.905 7057.723 - 7108.135: 20.4557% ( 722) 00:07:45.905 7108.135 - 7158.548: 24.7261% ( 686) 00:07:45.905 7158.548 - 7208.960: 30.3785% ( 908) 00:07:45.905 7208.960 - 7259.372: 35.8379% ( 877) 00:07:45.905 7259.372 - 7309.785: 40.8553% ( 806) 00:07:45.905 7309.785 - 7360.197: 45.5740% ( 758) 00:07:45.905 
7360.197 - 7410.609: 49.3775% ( 611) 00:07:45.905 7410.609 - 7461.022: 52.8635% ( 560) 00:07:45.905 7461.022 - 7511.434: 56.4430% ( 575) 00:07:45.905 7511.434 - 7561.846: 59.4995% ( 491) 00:07:45.905 7561.846 - 7612.258: 61.8588% ( 379) 00:07:45.905 7612.258 - 7662.671: 64.4734% ( 420) 00:07:45.905 7662.671 - 7713.083: 66.2849% ( 291) 00:07:45.905 7713.083 - 7763.495: 67.6606% ( 221) 00:07:45.905 7763.495 - 7813.908: 68.8807% ( 196) 00:07:45.905 7813.908 - 7864.320: 69.9141% ( 166) 00:07:45.905 7864.320 - 7914.732: 70.9475% ( 166) 00:07:45.905 7914.732 - 7965.145: 71.9435% ( 160) 00:07:45.905 7965.145 - 8015.557: 72.7839% ( 135) 00:07:45.905 8015.557 - 8065.969: 73.6554% ( 140) 00:07:45.905 8065.969 - 8116.382: 74.4646% ( 130) 00:07:45.905 8116.382 - 8166.794: 75.1930% ( 117) 00:07:45.905 8166.794 - 8217.206: 75.8840% ( 111) 00:07:45.905 8217.206 - 8267.618: 76.4380% ( 89) 00:07:45.905 8267.618 - 8318.031: 77.0045% ( 91) 00:07:45.905 8318.031 - 8368.443: 77.5772% ( 92) 00:07:45.905 8368.443 - 8418.855: 78.0876% ( 82) 00:07:45.905 8418.855 - 8469.268: 78.5608% ( 76) 00:07:45.905 8469.268 - 8519.680: 79.1335% ( 92) 00:07:45.905 8519.680 - 8570.092: 79.8618% ( 117) 00:07:45.905 8570.092 - 8620.505: 80.3411% ( 77) 00:07:45.905 8620.505 - 8670.917: 80.8889% ( 88) 00:07:45.905 8670.917 - 8721.329: 81.9036% ( 163) 00:07:45.905 8721.329 - 8771.742: 82.6631% ( 122) 00:07:45.905 8771.742 - 8822.154: 83.1736% ( 82) 00:07:45.905 8822.154 - 8872.566: 83.6965% ( 84) 00:07:45.905 8872.566 - 8922.978: 84.2567% ( 90) 00:07:45.905 8922.978 - 8973.391: 85.1158% ( 138) 00:07:45.905 8973.391 - 9023.803: 85.9624% ( 136) 00:07:45.905 9023.803 - 9074.215: 86.6845% ( 116) 00:07:45.905 9074.215 - 9124.628: 87.3630% ( 109) 00:07:45.905 9124.628 - 9175.040: 87.8922% ( 85) 00:07:45.905 9175.040 - 9225.452: 88.5085% ( 99) 00:07:45.905 9225.452 - 9275.865: 89.0500% ( 87) 00:07:45.905 9275.865 - 9326.277: 89.5730% ( 84) 00:07:45.905 9326.277 - 9376.689: 89.9963% ( 68) 00:07:45.905 9376.689 - 9427.102: 90.3137% ( 51) 00:07:45.905 9427.102 - 9477.514: 90.6437% ( 53) 00:07:45.905 9477.514 - 9527.926: 91.1541% ( 82) 00:07:45.905 9527.926 - 9578.338: 91.5090% ( 57) 00:07:45.905 9578.338 - 9628.751: 91.7704% ( 42) 00:07:45.905 9628.751 - 9679.163: 91.9572% ( 30) 00:07:45.905 9679.163 - 9729.575: 92.1315% ( 28) 00:07:45.905 9729.575 - 9779.988: 92.3494% ( 35) 00:07:45.905 9779.988 - 9830.400: 92.5050% ( 25) 00:07:45.905 9830.400 - 9880.812: 92.5984% ( 15) 00:07:45.905 9880.812 - 9931.225: 92.7478% ( 24) 00:07:45.905 9931.225 - 9981.637: 93.0715% ( 52) 00:07:45.905 9981.637 - 10032.049: 93.2831% ( 34) 00:07:45.905 10032.049 - 10082.462: 93.4885% ( 33) 00:07:45.905 10082.462 - 10132.874: 93.6877% ( 32) 00:07:45.905 10132.874 - 10183.286: 93.8185% ( 21) 00:07:45.905 10183.286 - 10233.698: 93.9368% ( 19) 00:07:45.905 10233.698 - 10284.111: 94.0924% ( 25) 00:07:45.905 10284.111 - 10334.523: 94.2729% ( 29) 00:07:45.905 10334.523 - 10384.935: 94.4161% ( 23) 00:07:45.905 10384.935 - 10435.348: 94.5904% ( 28) 00:07:45.905 10435.348 - 10485.760: 94.8020% ( 34) 00:07:45.905 10485.760 - 10536.172: 94.9888% ( 30) 00:07:45.905 10536.172 - 10586.585: 95.1071% ( 19) 00:07:45.905 10586.585 - 10636.997: 95.2253% ( 19) 00:07:45.905 10636.997 - 10687.409: 95.3312% ( 17) 00:07:45.905 10687.409 - 10737.822: 95.4370% ( 17) 00:07:45.905 10737.822 - 10788.234: 95.5304% ( 15) 00:07:45.905 10788.234 - 10838.646: 95.6362% ( 17) 00:07:45.905 10838.646 - 10889.058: 95.7234% ( 14) 00:07:45.905 10889.058 - 10939.471: 95.8603% ( 22) 00:07:45.905 10939.471 - 
10989.883: 95.9910% ( 21) 00:07:45.905 10989.883 - 11040.295: 96.0720% ( 13) 00:07:45.905 11040.295 - 11090.708: 96.1342% ( 10) 00:07:45.905 11090.708 - 11141.120: 96.2463% ( 18) 00:07:45.905 11141.120 - 11191.532: 96.3210% ( 12) 00:07:45.906 11191.532 - 11241.945: 96.3708% ( 8) 00:07:45.906 11241.945 - 11292.357: 96.4330% ( 10) 00:07:45.906 11292.357 - 11342.769: 96.5077% ( 12) 00:07:45.906 11342.769 - 11393.182: 96.5824% ( 12) 00:07:45.906 11393.182 - 11443.594: 96.6696% ( 14) 00:07:45.906 11443.594 - 11494.006: 96.7629% ( 15) 00:07:45.906 11494.006 - 11544.418: 96.9435% ( 29) 00:07:45.906 11544.418 - 11594.831: 97.0493% ( 17) 00:07:45.906 11594.831 - 11645.243: 97.1551% ( 17) 00:07:45.906 11645.243 - 11695.655: 97.2174% ( 10) 00:07:45.906 11695.655 - 11746.068: 97.2921% ( 12) 00:07:45.906 11746.068 - 11796.480: 97.3730% ( 13) 00:07:45.906 11796.480 - 11846.892: 97.5162% ( 23) 00:07:45.906 11846.892 - 11897.305: 97.6345% ( 19) 00:07:45.906 11897.305 - 11947.717: 97.7403% ( 17) 00:07:45.906 11947.717 - 11998.129: 97.9519% ( 34) 00:07:45.906 11998.129 - 12048.542: 98.0578% ( 17) 00:07:45.906 12048.542 - 12098.954: 98.1698% ( 18) 00:07:45.906 12098.954 - 12149.366: 98.2570% ( 14) 00:07:45.906 12149.366 - 12199.778: 98.3379% ( 13) 00:07:45.906 12199.778 - 12250.191: 98.4001% ( 10) 00:07:45.906 12250.191 - 12300.603: 98.4686% ( 11) 00:07:45.906 12300.603 - 12351.015: 98.5184% ( 8) 00:07:45.906 12351.015 - 12401.428: 98.5433% ( 4) 00:07:45.906 12401.428 - 12451.840: 98.5931% ( 8) 00:07:45.906 12451.840 - 12502.252: 98.6803% ( 14) 00:07:45.906 12502.252 - 12552.665: 98.7363% ( 9) 00:07:45.906 12552.665 - 12603.077: 98.7799% ( 7) 00:07:45.906 12603.077 - 12653.489: 98.8110% ( 5) 00:07:45.906 12653.489 - 12703.902: 98.8297% ( 3) 00:07:45.906 12703.902 - 12754.314: 98.8733% ( 7) 00:07:45.906 12754.314 - 12804.726: 98.9044% ( 5) 00:07:45.906 12804.726 - 12855.138: 98.9480% ( 7) 00:07:45.906 12855.138 - 12905.551: 98.9978% ( 8) 00:07:45.906 12905.551 - 13006.375: 99.0538% ( 9) 00:07:45.906 13006.375 - 13107.200: 99.0911% ( 6) 00:07:45.906 13107.200 - 13208.025: 99.1223% ( 5) 00:07:45.906 13208.025 - 13308.849: 99.1534% ( 5) 00:07:45.906 13308.849 - 13409.674: 99.1721% ( 3) 00:07:45.906 13409.674 - 13510.498: 99.1907% ( 3) 00:07:45.906 13510.498 - 13611.323: 99.2032% ( 2) 00:07:45.906 22685.538 - 22786.363: 99.2281% ( 4) 00:07:45.906 22786.363 - 22887.188: 99.2530% ( 4) 00:07:45.906 22887.188 - 22988.012: 99.2779% ( 4) 00:07:45.906 22988.012 - 23088.837: 99.3028% ( 4) 00:07:45.906 23088.837 - 23189.662: 99.3277% ( 4) 00:07:45.906 23189.662 - 23290.486: 99.3526% ( 4) 00:07:45.906 23290.486 - 23391.311: 99.3775% ( 4) 00:07:45.906 23391.311 - 23492.135: 99.4024% ( 4) 00:07:45.906 23492.135 - 23592.960: 99.4335% ( 5) 00:07:45.906 23592.960 - 23693.785: 99.4584% ( 4) 00:07:45.906 23693.785 - 23794.609: 99.4833% ( 4) 00:07:45.906 23794.609 - 23895.434: 99.5082% ( 4) 00:07:45.906 23895.434 - 23996.258: 99.5331% ( 4) 00:07:45.906 23996.258 - 24097.083: 99.5518% ( 3) 00:07:45.906 24097.083 - 24197.908: 99.5705% ( 3) 00:07:45.906 24197.908 - 24298.732: 99.6016% ( 5) 00:07:45.906 27827.594 - 28029.243: 99.6078% ( 1) 00:07:45.906 28029.243 - 28230.892: 99.6576% ( 8) 00:07:45.906 28230.892 - 28432.542: 99.7074% ( 8) 00:07:45.906 28432.542 - 28634.191: 99.7572% ( 8) 00:07:45.906 28634.191 - 28835.840: 99.8070% ( 8) 00:07:45.906 28835.840 - 29037.489: 99.8568% ( 8) 00:07:45.906 29037.489 - 29239.138: 99.9004% ( 7) 00:07:45.906 29239.138 - 29440.788: 99.9502% ( 8) 00:07:45.906 29440.788 - 29642.437: 100.0000% ( 8) 
00:07:45.906 00:07:45.906 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:45.906 ============================================================================== 00:07:45.906 Range in us Cumulative IO count 00:07:45.906 5595.766 - 5620.972: 0.0062% ( 1) 00:07:45.906 5696.591 - 5721.797: 0.0125% ( 1) 00:07:45.906 5772.209 - 5797.415: 0.0374% ( 4) 00:07:45.906 5797.415 - 5822.622: 0.0623% ( 4) 00:07:45.906 5822.622 - 5847.828: 0.0934% ( 5) 00:07:45.906 5847.828 - 5873.034: 0.1183% ( 4) 00:07:45.906 5873.034 - 5898.240: 0.1992% ( 13) 00:07:45.906 5898.240 - 5923.446: 0.2739% ( 12) 00:07:45.906 5923.446 - 5948.652: 0.2988% ( 4) 00:07:45.906 5948.652 - 5973.858: 0.3424% ( 7) 00:07:45.906 5973.858 - 5999.065: 0.3673% ( 4) 00:07:45.906 5999.065 - 6024.271: 0.4171% ( 8) 00:07:45.906 6024.271 - 6049.477: 0.4669% ( 8) 00:07:45.906 6049.477 - 6074.683: 0.5105% ( 7) 00:07:45.906 6074.683 - 6099.889: 0.5540% ( 7) 00:07:45.906 6099.889 - 6125.095: 0.6038% ( 8) 00:07:45.906 6125.095 - 6150.302: 0.7159% ( 18) 00:07:45.906 6150.302 - 6175.508: 0.7595% ( 7) 00:07:45.906 6175.508 - 6200.714: 0.7906% ( 5) 00:07:45.906 6200.714 - 6225.920: 0.8279% ( 6) 00:07:45.906 6225.920 - 6251.126: 0.8715% ( 7) 00:07:45.906 6251.126 - 6276.332: 0.9151% ( 7) 00:07:45.906 6276.332 - 6301.538: 0.9649% ( 8) 00:07:45.906 6301.538 - 6326.745: 1.0022% ( 6) 00:07:45.906 6326.745 - 6351.951: 1.1205% ( 19) 00:07:45.906 6351.951 - 6377.157: 1.1890% ( 11) 00:07:45.906 6377.157 - 6402.363: 1.2388% ( 8) 00:07:45.906 6402.363 - 6427.569: 1.4380% ( 32) 00:07:45.906 6427.569 - 6452.775: 1.5065% ( 11) 00:07:45.906 6452.775 - 6503.188: 1.6248% ( 19) 00:07:45.906 6503.188 - 6553.600: 1.8364% ( 34) 00:07:45.906 6553.600 - 6604.012: 2.2286% ( 63) 00:07:45.906 6604.012 - 6654.425: 2.5336% ( 49) 00:07:45.906 6654.425 - 6704.837: 2.9569% ( 68) 00:07:45.906 6704.837 - 6755.249: 3.5670% ( 98) 00:07:45.906 6755.249 - 6805.662: 4.5941% ( 165) 00:07:45.906 6805.662 - 6856.074: 5.7271% ( 182) 00:07:45.906 6856.074 - 6906.486: 7.7129% ( 319) 00:07:45.906 6906.486 - 6956.898: 10.1407% ( 390) 00:07:45.906 6956.898 - 7007.311: 13.1412% ( 482) 00:07:45.906 7007.311 - 7057.723: 17.2871% ( 666) 00:07:45.906 7057.723 - 7108.135: 21.5762% ( 689) 00:07:45.906 7108.135 - 7158.548: 25.9400% ( 701) 00:07:45.906 7158.548 - 7208.960: 30.1980% ( 684) 00:07:45.906 7208.960 - 7259.372: 35.1531% ( 796) 00:07:45.906 7259.372 - 7309.785: 39.6041% ( 715) 00:07:45.906 7309.785 - 7360.197: 44.5344% ( 792) 00:07:45.906 7360.197 - 7410.609: 49.0600% ( 727) 00:07:45.906 7410.609 - 7461.022: 53.0316% ( 638) 00:07:45.906 7461.022 - 7511.434: 56.6795% ( 586) 00:07:45.906 7511.434 - 7561.846: 59.9228% ( 521) 00:07:45.906 7561.846 - 7612.258: 63.2097% ( 528) 00:07:45.906 7612.258 - 7662.671: 65.5752% ( 380) 00:07:45.906 7662.671 - 7713.083: 67.5423% ( 316) 00:07:45.906 7713.083 - 7763.495: 68.9928% ( 233) 00:07:45.906 7763.495 - 7813.908: 70.3000% ( 210) 00:07:45.906 7813.908 - 7864.320: 71.2712% ( 156) 00:07:45.906 7864.320 - 7914.732: 72.1302% ( 138) 00:07:45.906 7914.732 - 7965.145: 72.9644% ( 134) 00:07:45.906 7965.145 - 8015.557: 73.7363% ( 124) 00:07:45.906 8015.557 - 8065.969: 74.3650% ( 101) 00:07:45.906 8065.969 - 8116.382: 74.9128% ( 88) 00:07:45.906 8116.382 - 8166.794: 75.3984% ( 78) 00:07:45.906 8166.794 - 8217.206: 75.8715% ( 76) 00:07:45.906 8217.206 - 8267.618: 76.4753% ( 97) 00:07:45.906 8267.618 - 8318.031: 77.3344% ( 138) 00:07:45.906 8318.031 - 8368.443: 77.9071% ( 92) 00:07:45.906 8368.443 - 8418.855: 78.3740% ( 75) 00:07:45.906 8418.855 - 
8469.268: 78.8720% ( 80) 00:07:45.906 8469.268 - 8519.680: 79.4821% ( 98) 00:07:45.906 8519.680 - 8570.092: 80.0859% ( 97) 00:07:45.906 8570.092 - 8620.505: 80.7271% ( 103) 00:07:45.906 8620.505 - 8670.917: 81.5924% ( 139) 00:07:45.906 8670.917 - 8721.329: 82.3332% ( 119) 00:07:45.906 8721.329 - 8771.742: 82.9370% ( 97) 00:07:45.906 8771.742 - 8822.154: 83.4910% ( 89) 00:07:45.906 8822.154 - 8872.566: 84.0949% ( 97) 00:07:45.906 8872.566 - 8922.978: 84.7174% ( 100) 00:07:45.906 8922.978 - 8973.391: 85.2403% ( 84) 00:07:45.906 8973.391 - 9023.803: 85.7756% ( 86) 00:07:45.906 9023.803 - 9074.215: 86.3670% ( 95) 00:07:45.906 9074.215 - 9124.628: 87.1576% ( 127) 00:07:45.906 9124.628 - 9175.040: 87.9731% ( 131) 00:07:45.906 9175.040 - 9225.452: 88.5458% ( 92) 00:07:45.906 9225.452 - 9275.865: 88.9940% ( 72) 00:07:45.906 9275.865 - 9326.277: 89.5481% ( 89) 00:07:45.906 9326.277 - 9376.689: 90.0087% ( 74) 00:07:45.906 9376.689 - 9427.102: 90.3449% ( 54) 00:07:45.906 9427.102 - 9477.514: 90.6125% ( 43) 00:07:45.906 9477.514 - 9527.926: 90.9051% ( 47) 00:07:45.906 9527.926 - 9578.338: 91.1106% ( 33) 00:07:45.906 9578.338 - 9628.751: 91.2724% ( 26) 00:07:45.906 9628.751 - 9679.163: 91.4343% ( 26) 00:07:45.906 9679.163 - 9729.575: 91.5899% ( 25) 00:07:45.906 9729.575 - 9779.988: 91.7642% ( 28) 00:07:45.906 9779.988 - 9830.400: 91.9385% ( 28) 00:07:45.906 9830.400 - 9880.812: 92.0879% ( 24) 00:07:45.906 9880.812 - 9931.225: 92.2186% ( 21) 00:07:45.906 9931.225 - 9981.637: 92.3494% ( 21) 00:07:45.906 9981.637 - 10032.049: 92.4801% ( 21) 00:07:45.906 10032.049 - 10082.462: 92.5672% ( 14) 00:07:45.906 10082.462 - 10132.874: 92.6668% ( 16) 00:07:45.906 10132.874 - 10183.286: 92.9594% ( 47) 00:07:45.906 10183.286 - 10233.698: 93.2520% ( 47) 00:07:45.906 10233.698 - 10284.111: 93.5446% ( 47) 00:07:45.906 10284.111 - 10334.523: 93.8123% ( 43) 00:07:45.906 10334.523 - 10384.935: 94.0052% ( 31) 00:07:45.906 10384.935 - 10435.348: 94.1360% ( 21) 00:07:45.906 10435.348 - 10485.760: 94.2667% ( 21) 00:07:45.906 10485.760 - 10536.172: 94.3912% ( 20) 00:07:45.906 10536.172 - 10586.585: 94.6775% ( 46) 00:07:45.906 10586.585 - 10636.997: 94.8332% ( 25) 00:07:45.906 10636.997 - 10687.409: 95.0012% ( 27) 00:07:45.906 10687.409 - 10737.822: 95.2627% ( 42) 00:07:45.906 10737.822 - 10788.234: 95.4495% ( 30) 00:07:45.906 10788.234 - 10838.646: 95.6051% ( 25) 00:07:45.906 10838.646 - 10889.058: 95.7669% ( 26) 00:07:45.906 10889.058 - 10939.471: 95.8977% ( 21) 00:07:45.906 10939.471 - 10989.883: 96.0284% ( 21) 00:07:45.906 10989.883 - 11040.295: 96.1467% ( 19) 00:07:45.906 11040.295 - 11090.708: 96.2463% ( 16) 00:07:45.906 11090.708 - 11141.120: 96.3521% ( 17) 00:07:45.906 11141.120 - 11191.532: 96.4517% ( 16) 00:07:45.906 11191.532 - 11241.945: 96.5513% ( 16) 00:07:45.906 11241.945 - 11292.357: 96.6384% ( 14) 00:07:45.906 11292.357 - 11342.769: 96.7256% ( 14) 00:07:45.906 11342.769 - 11393.182: 96.8127% ( 14) 00:07:45.906 11393.182 - 11443.594: 96.8875% ( 12) 00:07:45.907 11443.594 - 11494.006: 96.9933% ( 17) 00:07:45.907 11494.006 - 11544.418: 97.0929% ( 16) 00:07:45.907 11544.418 - 11594.831: 97.1738% ( 13) 00:07:45.907 11594.831 - 11645.243: 97.2423% ( 11) 00:07:45.907 11645.243 - 11695.655: 97.3232% ( 13) 00:07:45.907 11695.655 - 11746.068: 97.3917% ( 11) 00:07:45.907 11746.068 - 11796.480: 97.4602% ( 11) 00:07:45.907 11796.480 - 11846.892: 97.5100% ( 8) 00:07:45.907 11846.892 - 11897.305: 97.5660% ( 9) 00:07:45.907 11897.305 - 11947.717: 97.6220% ( 9) 00:07:45.907 11947.717 - 11998.129: 97.6656% ( 7) 00:07:45.907 
11998.129 - 12048.542: 97.7216% ( 9) 00:07:45.907 12048.542 - 12098.954: 97.7652% ( 7) 00:07:45.907 12098.954 - 12149.366: 97.8025% ( 6) 00:07:45.907 12149.366 - 12199.778: 97.8461% ( 7) 00:07:45.907 12199.778 - 12250.191: 97.8959% ( 8) 00:07:45.907 12250.191 - 12300.603: 97.9644% ( 11) 00:07:45.907 12300.603 - 12351.015: 98.0391% ( 12) 00:07:45.907 12351.015 - 12401.428: 98.1262% ( 14) 00:07:45.907 12401.428 - 12451.840: 98.1823% ( 9) 00:07:45.907 12451.840 - 12502.252: 98.2694% ( 14) 00:07:45.907 12502.252 - 12552.665: 98.3503% ( 13) 00:07:45.907 12552.665 - 12603.077: 98.5247% ( 28) 00:07:45.907 12603.077 - 12653.489: 98.6056% ( 13) 00:07:45.907 12653.489 - 12703.902: 98.6305% ( 4) 00:07:45.907 12703.902 - 12754.314: 98.6741% ( 7) 00:07:45.907 12754.314 - 12804.726: 98.7114% ( 6) 00:07:45.907 12804.726 - 12855.138: 98.7488% ( 6) 00:07:45.907 12855.138 - 12905.551: 98.7923% ( 7) 00:07:45.907 12905.551 - 13006.375: 98.8421% ( 8) 00:07:45.907 13006.375 - 13107.200: 98.9044% ( 10) 00:07:45.907 13107.200 - 13208.025: 99.0227% ( 19) 00:07:45.907 13208.025 - 13308.849: 99.0911% ( 11) 00:07:45.907 13308.849 - 13409.674: 99.1472% ( 9) 00:07:45.907 13409.674 - 13510.498: 99.1721% ( 4) 00:07:45.907 13510.498 - 13611.323: 99.1907% ( 3) 00:07:45.907 13611.323 - 13712.148: 99.2032% ( 2) 00:07:45.907 21677.292 - 21778.117: 99.2156% ( 2) 00:07:45.907 21778.117 - 21878.942: 99.2405% ( 4) 00:07:45.907 21878.942 - 21979.766: 99.2592% ( 3) 00:07:45.907 21979.766 - 22080.591: 99.2841% ( 4) 00:07:45.907 22080.591 - 22181.415: 99.3028% ( 3) 00:07:45.907 22181.415 - 22282.240: 99.3215% ( 3) 00:07:45.907 22282.240 - 22383.065: 99.3401% ( 3) 00:07:45.907 22383.065 - 22483.889: 99.3650% ( 4) 00:07:45.907 22483.889 - 22584.714: 99.3962% ( 5) 00:07:45.907 22584.714 - 22685.538: 99.4211% ( 4) 00:07:45.907 22685.538 - 22786.363: 99.4460% ( 4) 00:07:45.907 22786.363 - 22887.188: 99.4709% ( 4) 00:07:45.907 22887.188 - 22988.012: 99.4958% ( 4) 00:07:45.907 22988.012 - 23088.837: 99.5207% ( 4) 00:07:45.907 23088.837 - 23189.662: 99.5456% ( 4) 00:07:45.907 23189.662 - 23290.486: 99.5705% ( 4) 00:07:45.907 23290.486 - 23391.311: 99.6016% ( 5) 00:07:45.907 26416.049 - 26617.698: 99.6514% ( 8) 00:07:45.907 26617.698 - 26819.348: 99.7012% ( 8) 00:07:45.907 26819.348 - 27020.997: 99.7448% ( 7) 00:07:45.907 27020.997 - 27222.646: 99.7883% ( 7) 00:07:45.907 27222.646 - 27424.295: 99.8381% ( 8) 00:07:45.907 27424.295 - 27625.945: 99.8817% ( 7) 00:07:45.907 27625.945 - 27827.594: 99.9315% ( 8) 00:07:45.907 27827.594 - 28029.243: 99.9813% ( 8) 00:07:45.907 28029.243 - 28230.892: 100.0000% ( 3) 00:07:45.907 00:07:45.907 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:45.907 ============================================================================== 00:07:45.907 Range in us Cumulative IO count 00:07:45.907 5671.385 - 5696.591: 0.0062% ( 1) 00:07:45.907 5721.797 - 5747.003: 0.0125% ( 1) 00:07:45.907 5747.003 - 5772.209: 0.0187% ( 1) 00:07:45.907 5772.209 - 5797.415: 0.0498% ( 5) 00:07:45.907 5797.415 - 5822.622: 0.0872% ( 6) 00:07:45.907 5822.622 - 5847.828: 0.1370% ( 8) 00:07:45.907 5847.828 - 5873.034: 0.1930% ( 9) 00:07:45.907 5873.034 - 5898.240: 0.2428% ( 8) 00:07:45.907 5898.240 - 5923.446: 0.3362% ( 15) 00:07:45.907 5923.446 - 5948.652: 0.4295% ( 15) 00:07:45.907 5948.652 - 5973.858: 0.5416% ( 18) 00:07:45.907 5973.858 - 5999.065: 0.5976% ( 9) 00:07:45.907 5999.065 - 6024.271: 0.6412% ( 7) 00:07:45.907 6024.271 - 6049.477: 0.6848% ( 7) 00:07:45.907 6049.477 - 6074.683: 0.7283% ( 7) 00:07:45.907 6074.683 - 
[ ... per-bucket cumulative latency rows for this namespace elided ... ]
00:07:46.167 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:46.167 ==============================================================================
00:07:46.167        Range in us     Cumulative    IO count
[ ... per-bucket cumulative latency rows elided ... ]
00:07:46.169 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:46.169 ==============================================================================
00:07:46.169        Range in us     Cumulative    IO count
[ ... per-bucket cumulative latency rows elided; the tail of this histogram follows ... ]
00:07:46.169 17442.658 - 17543.483: 99.6280% ( 4) 00:07:46.169 17543.483 -
17644.308: 99.6528% ( 4) 00:07:46.169 17644.308 - 17745.132: 99.6714% ( 3) 00:07:46.169 17745.132 - 17845.957: 99.7024% ( 5) 00:07:46.169 17845.957 - 17946.782: 99.7272% ( 4) 00:07:46.169 17946.782 - 18047.606: 99.7520% ( 4) 00:07:46.169 18047.606 - 18148.431: 99.7768% ( 4) 00:07:46.169 18148.431 - 18249.255: 99.8016% ( 4) 00:07:46.169 18249.255 - 18350.080: 99.8264% ( 4) 00:07:46.169 18350.080 - 18450.905: 99.8512% ( 4) 00:07:46.169 18450.905 - 18551.729: 99.8760% ( 4) 00:07:46.169 18551.729 - 18652.554: 99.9008% ( 4) 00:07:46.170 18652.554 - 18753.378: 99.9194% ( 3) 00:07:46.170 18753.378 - 18854.203: 99.9442% ( 4) 00:07:46.170 18854.203 - 18955.028: 99.9690% ( 4) 00:07:46.170 18955.028 - 19055.852: 99.9938% ( 4) 00:07:46.170 19055.852 - 19156.677: 100.0000% ( 1) 00:07:46.170 00:07:46.170 11:22:31 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:46.170 00:07:46.170 real 0m2.525s 00:07:46.170 user 0m2.207s 00:07:46.170 sys 0m0.201s 00:07:46.170 11:22:31 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.170 11:22:31 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.170 ************************************ 00:07:46.170 END TEST nvme_perf 00:07:46.170 ************************************ 00:07:46.170 11:22:31 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:46.170 11:22:31 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:46.170 11:22:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.170 11:22:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.170 ************************************ 00:07:46.170 START TEST nvme_hello_world 00:07:46.170 ************************************ 00:07:46.170 11:22:31 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:46.170 Initializing NVMe Controllers 00:07:46.170 Attached to 0000:00:10.0 00:07:46.170 Namespace ID: 1 size: 6GB 00:07:46.170 Attached to 0000:00:11.0 00:07:46.170 Namespace ID: 1 size: 5GB 00:07:46.170 Attached to 0000:00:13.0 00:07:46.170 Namespace ID: 1 size: 1GB 00:07:46.170 Attached to 0000:00:12.0 00:07:46.170 Namespace ID: 1 size: 4GB 00:07:46.170 Namespace ID: 2 size: 4GB 00:07:46.170 Namespace ID: 3 size: 4GB 00:07:46.170 Initialization complete. 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 00:07:46.170 INFO: using host memory buffer for IO 00:07:46.170 Hello world! 
00:07:46.428 00:07:46.428 real 0m0.221s 00:07:46.428 user 0m0.084s 00:07:46.428 sys 0m0.094s 00:07:46.428 11:22:31 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.428 11:22:31 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:46.428 ************************************ 00:07:46.428 END TEST nvme_hello_world 00:07:46.428 ************************************ 00:07:46.428 11:22:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:46.428 11:22:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.428 11:22:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.428 11:22:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.428 ************************************ 00:07:46.428 START TEST nvme_sgl 00:07:46.428 ************************************ 00:07:46.428 11:22:31 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:46.428 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:46.428 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:46.428 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:46.687 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:46.687 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:46.687 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:46.687 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:46.687 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:46.687 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:46.687 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:46.687 NVMe Readv/Writev Request test 00:07:46.687 Attached to 0000:00:10.0 00:07:46.687 Attached to 0000:00:11.0 00:07:46.687 Attached to 0000:00:13.0 00:07:46.687 Attached to 0000:00:12.0 00:07:46.687 0000:00:10.0: build_io_request_2 test passed 00:07:46.687 0000:00:10.0: build_io_request_4 test passed 00:07:46.687 0000:00:10.0: build_io_request_5 test passed 00:07:46.687 0000:00:10.0: build_io_request_6 test passed 00:07:46.687 0000:00:10.0: build_io_request_7 test passed 00:07:46.687 0000:00:10.0: build_io_request_10 test passed 00:07:46.687 0000:00:11.0: build_io_request_2 test passed 00:07:46.687 0000:00:11.0: build_io_request_4 test passed 00:07:46.687 0000:00:11.0: build_io_request_5 test passed 00:07:46.687 0000:00:11.0: build_io_request_6 test passed 00:07:46.687 0000:00:11.0: build_io_request_7 test passed 00:07:46.687 0000:00:11.0: build_io_request_10 test passed 00:07:46.687 Cleaning up... 00:07:46.687 00:07:46.687 real 0m0.292s 00:07:46.687 user 0m0.148s 00:07:46.687 sys 0m0.100s 00:07:46.687 11:22:31 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.687 11:22:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:46.687 ************************************ 00:07:46.687 END TEST nvme_sgl 00:07:46.687 ************************************ 00:07:46.687 11:22:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:46.687 11:22:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.687 11:22:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.687 11:22:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.687 ************************************ 00:07:46.687 START TEST nvme_e2edp 00:07:46.687 ************************************ 00:07:46.687 11:22:31 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:46.946 NVMe Write/Read with End-to-End data protection test 00:07:46.946 Attached to 0000:00:10.0 00:07:46.946 Attached to 0000:00:11.0 00:07:46.946 Attached to 0000:00:13.0 00:07:46.946 Attached to 0000:00:12.0 00:07:46.946 Cleaning up... 
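Every test in this log is wrapped the same way: run_test prints the START TEST banner, times the test binary (the source of the real/user/sys lines), and prints the END TEST banner. A simplified sketch of that wrapper (the actual helper lives in common/autotest_common.sh and does more bookkeeping, so treat this as illustrative only):

    # Simplified sketch of the run_test pattern seen throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys timing lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp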
00:07:46.946 00:07:46.946 real 0m0.199s 00:07:46.946 user 0m0.072s 00:07:46.946 sys 0m0.087s 00:07:46.946 11:22:32 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.946 11:22:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:46.946 ************************************ 00:07:46.946 END TEST nvme_e2edp 00:07:46.946 ************************************ 00:07:46.946 11:22:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:46.946 11:22:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.946 11:22:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.946 11:22:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.946 ************************************ 00:07:46.946 START TEST nvme_reserve 00:07:46.946 ************************************ 00:07:46.946 11:22:32 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:47.204 ===================================================== 00:07:47.204 NVMe Controller at PCI bus 0, device 16, function 0 00:07:47.204 ===================================================== 00:07:47.204 Reservations: Not Supported 00:07:47.204 ===================================================== 00:07:47.204 NVMe Controller at PCI bus 0, device 17, function 0 00:07:47.204 ===================================================== 00:07:47.204 Reservations: Not Supported 00:07:47.204 ===================================================== 00:07:47.204 NVMe Controller at PCI bus 0, device 19, function 0 00:07:47.204 ===================================================== 00:07:47.204 Reservations: Not Supported 00:07:47.204 ===================================================== 00:07:47.204 NVMe Controller at PCI bus 0, device 18, function 0 00:07:47.204 ===================================================== 00:07:47.204 Reservations: Not Supported 00:07:47.204 Reservation test passed 00:07:47.204 00:07:47.204 real 0m0.194s 00:07:47.204 user 0m0.064s 00:07:47.204 sys 0m0.097s 00:07:47.204 11:22:32 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.204 11:22:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:47.204 ************************************ 00:07:47.204 END TEST nvme_reserve 00:07:47.204 ************************************ 00:07:47.204 11:22:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:47.204 11:22:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.204 11:22:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.204 11:22:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.204 ************************************ 00:07:47.204 START TEST nvme_err_injection 00:07:47.204 ************************************ 00:07:47.204 11:22:32 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:47.463 NVMe Error Injection test 00:07:47.463 Attached to 0000:00:10.0 00:07:47.463 Attached to 0000:00:11.0 00:07:47.463 Attached to 0000:00:13.0 00:07:47.463 Attached to 0000:00:12.0 00:07:47.463 0000:00:10.0: get features failed as expected 00:07:47.463 0000:00:11.0: get features failed as expected 00:07:47.463 0000:00:13.0: get features failed as expected 00:07:47.463 0000:00:12.0: get features failed as expected 00:07:47.463 
0000:00:10.0: get features successfully as expected 00:07:47.463 0000:00:11.0: get features successfully as expected 00:07:47.463 0000:00:13.0: get features successfully as expected 00:07:47.463 0000:00:12.0: get features successfully as expected 00:07:47.463 0000:00:10.0: read failed as expected 00:07:47.463 0000:00:11.0: read failed as expected 00:07:47.463 0000:00:13.0: read failed as expected 00:07:47.463 0000:00:12.0: read failed as expected 00:07:47.463 0000:00:10.0: read successfully as expected 00:07:47.463 0000:00:11.0: read successfully as expected 00:07:47.463 0000:00:13.0: read successfully as expected 00:07:47.463 0000:00:12.0: read successfully as expected 00:07:47.463 Cleaning up... 00:07:47.463 00:07:47.463 real 0m0.220s 00:07:47.463 user 0m0.078s 00:07:47.463 sys 0m0.099s 00:07:47.463 ************************************ 00:07:47.463 END TEST nvme_err_injection 00:07:47.463 ************************************ 00:07:47.463 11:22:32 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.463 11:22:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 11:22:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:47.463 11:22:32 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:47.463 11:22:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.463 11:22:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.463 ************************************ 00:07:47.463 START TEST nvme_overhead 00:07:47.463 ************************************ 00:07:47.463 11:22:32 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:48.839 Initializing NVMe Controllers 00:07:48.839 Attached to 0000:00:10.0 00:07:48.839 Attached to 0000:00:11.0 00:07:48.839 Attached to 0000:00:13.0 00:07:48.839 Attached to 0000:00:12.0 00:07:48.839 Initialization complete. Launching workers. 
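The overhead run launched above (-o 4096 -t 1 -H -i 0) prints cumulative submit and complete latency histograms, one "range: cumulative% (count)" row per bucket. As a post-processing sketch, an approximate p99 submit latency can be pulled out of the saved per-line console output with plain awk (console.log is a stand-in file name; nothing SPDK-specific is assumed):

    # Print the upper bound of the first Submit-histogram bucket whose cumulative share reaches 99%.
    # Fields are scanned by content, so the leading console timestamp column does not matter.
    awk '/Submit histogram/,/Complete histogram/ {
        for (i = 1; i <= NF - 2; i++)
            if ($i == "-" && $(i + 2) + 0 >= 99) {
                gsub(":", "", $(i + 1))
                print "~p99 submit latency: " $(i + 1) " us"
                exit
            }
    }' console.log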
00:07:48.839 submit (in ns) avg, min, max = 11115.0, 9692.3, 60912.3
00:07:48.839 complete (in ns) avg, min, max = 7450.8, 7145.4, 49603.8
00:07:48.839 Submit histogram
00:07:48.839 ================
00:07:48.839        Range in us     Cumulative     Count
[ ... per-bucket cumulative submit-latency rows elided ... ]
00:07:48.840 Complete histogram
00:07:48.840 ==================
00:07:48.840        Range in us     Cumulative     Count
[ ... per-bucket cumulative complete-latency rows elided; the tail of this histogram follows ... ]
00:07:48.840 17.723
- 17.822: 99.9401% ( 1) 00:07:48.840 18.018 - 18.117: 99.9455% ( 1) 00:07:48.840 18.511 - 18.609: 99.9510% ( 1) 00:07:48.840 18.609 - 18.708: 99.9564% ( 1) 00:07:48.840 19.692 - 19.791: 99.9619% ( 1) 00:07:48.840 20.185 - 20.283: 99.9673% ( 1) 00:07:48.840 20.972 - 21.071: 99.9728% ( 1) 00:07:48.840 21.366 - 21.465: 99.9782% ( 1) 00:07:48.840 22.055 - 22.154: 99.9837% ( 1) 00:07:48.840 22.745 - 22.843: 99.9891% ( 1) 00:07:48.840 45.292 - 45.489: 99.9946% ( 1) 00:07:48.840 49.428 - 49.625: 100.0000% ( 1) 00:07:48.840 00:07:48.840 ************************************ 00:07:48.840 END TEST nvme_overhead 00:07:48.840 ************************************ 00:07:48.840 00:07:48.840 real 0m1.220s 00:07:48.840 user 0m1.069s 00:07:48.840 sys 0m0.100s 00:07:48.840 11:22:33 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.840 11:22:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:48.840 11:22:33 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:48.840 11:22:33 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:48.840 11:22:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.840 11:22:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.840 ************************************ 00:07:48.840 START TEST nvme_arbitration 00:07:48.840 ************************************ 00:07:48.840 11:22:33 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:52.129 Initializing NVMe Controllers 00:07:52.129 Attached to 0000:00:10.0 00:07:52.129 Attached to 0000:00:11.0 00:07:52.129 Attached to 0000:00:13.0 00:07:52.129 Attached to 0000:00:12.0 00:07:52.129 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:52.129 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:52.129 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:52.129 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:52.129 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:52.129 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:52.129 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:52.129 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:52.129 Initialization complete. Launching workers. 
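The arbitration example above echoes its configuration before launching workers. Of those flags, -c 0xf appears to be the core mask: it matches the four worker threads reported on cores 0 through 3 just below (the remaining flags are left unannotated here rather than guessed at). Decoding such a mask is plain arithmetic:

    # Decode a hex core mask into core ids: 0xf -> 0 1 2 3.
    mask=0xf
    printf 'cores:'
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done
    echo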
00:07:52.129 Starting thread on core 1 with urgent priority queue 00:07:52.129 Starting thread on core 2 with urgent priority queue 00:07:52.129 Starting thread on core 3 with urgent priority queue 00:07:52.129 Starting thread on core 0 with urgent priority queue 00:07:52.129 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:52.129 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:52.129 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:52.129 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:07:52.129 QEMU NVMe Ctrl (12343 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:07:52.129 QEMU NVMe Ctrl (12342 ) core 3: 853.33 IO/s 117.19 secs/100000 ios 00:07:52.129 ======================================================== 00:07:52.129 00:07:52.129 ************************************ 00:07:52.129 END TEST nvme_arbitration 00:07:52.129 ************************************ 00:07:52.129 00:07:52.129 real 0m3.294s 00:07:52.129 user 0m9.243s 00:07:52.129 sys 0m0.101s 00:07:52.129 11:22:37 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.129 11:22:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 11:22:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:52.129 11:22:37 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:52.129 11:22:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.129 11:22:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.129 ************************************ 00:07:52.129 START TEST nvme_single_aen 00:07:52.129 ************************************ 00:07:52.129 11:22:37 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:52.129 Asynchronous Event Request test 00:07:52.129 Attached to 0000:00:10.0 00:07:52.129 Attached to 0000:00:11.0 00:07:52.129 Attached to 0000:00:13.0 00:07:52.129 Attached to 0000:00:12.0 00:07:52.129 Reset controller to setup AER completions for this process 00:07:52.129 Registering asynchronous event callbacks... 
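The per-core figures in the arbitration summary above are internally consistent: a core sustaining 874.67 IO/s needs about 114.33 seconds for 100000 I/Os, which is the secs/100000 ios value printed next to it (and likewise 896.00 IO/s gives 111.61 s, 853.33 IO/s gives 117.19 s). A one-line check:

    # 100000 ios / 874.67 IO/s rounds to 114.33 s, matching the arbitration summary above.
    printf '%.2f\n' "$(echo 'scale=6; 100000 / 874.67' | bc)"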
00:07:52.129 Getting orig temperature thresholds of all controllers 00:07:52.129 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:52.129 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:52.129 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:52.129 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:52.129 Setting all controllers temperature threshold low to trigger AER 00:07:52.129 Waiting for all controllers temperature threshold to be set lower 00:07:52.129 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:52.129 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:52.129 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:52.129 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:52.129 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:52.129 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:52.129 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:52.129 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:52.129 Waiting for all controllers to trigger AER and reset threshold 00:07:52.129 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.129 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.129 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.129 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:52.129 Cleaning up... 00:07:52.387 ************************************ 00:07:52.387 END TEST nvme_single_aen 00:07:52.387 ************************************ 00:07:52.387 00:07:52.387 real 0m0.242s 00:07:52.387 user 0m0.082s 00:07:52.387 sys 0m0.101s 00:07:52.387 11:22:37 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.387 11:22:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:52.387 11:22:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:52.387 11:22:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.387 11:22:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.387 11:22:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.387 ************************************ 00:07:52.387 START TEST nvme_doorbell_aers 00:07:52.387 ************************************ 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # local bdfs 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 
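The nvme_doorbell_aers helper traced above builds its device list first: get_nvme_bdfs renders the host's NVMe configuration with scripts/gen_nvme.sh and pulls each PCIe address (traddr) out of the JSON with jq, which is why the four controller addresses are printed on the next lines. Condensed into a standalone sketch (paths exactly as shown in this log):

    # Enumerate local NVMe controllers the same way the traced helper does.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"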
00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:52.387 11:22:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:52.646 [2024-10-27 11:22:37.742899] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:02.626 Executing: test_write_invalid_db 00:08:02.626 Waiting for AER completion... 00:08:02.626 Failure: test_write_invalid_db 00:08:02.626 00:08:02.626 Executing: test_invalid_db_write_overflow_sq 00:08:02.626 Waiting for AER completion... 00:08:02.626 Failure: test_invalid_db_write_overflow_sq 00:08:02.626 00:08:02.626 Executing: test_invalid_db_write_overflow_cq 00:08:02.626 Waiting for AER completion... 00:08:02.626 Failure: test_invalid_db_write_overflow_cq 00:08:02.626 00:08:02.626 11:22:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:02.626 11:22:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:02.626 [2024-10-27 11:22:47.788538] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:12.626 Executing: test_write_invalid_db 00:08:12.626 Waiting for AER completion... 00:08:12.626 Failure: test_write_invalid_db 00:08:12.626 00:08:12.626 Executing: test_invalid_db_write_overflow_sq 00:08:12.626 Waiting for AER completion... 00:08:12.626 Failure: test_invalid_db_write_overflow_sq 00:08:12.626 00:08:12.626 Executing: test_invalid_db_write_overflow_cq 00:08:12.626 Waiting for AER completion... 00:08:12.626 Failure: test_invalid_db_write_overflow_cq 00:08:12.626 00:08:12.626 11:22:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:12.626 11:22:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:12.626 [2024-10-27 11:22:57.812222] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:22.606 Executing: test_write_invalid_db 00:08:22.606 Waiting for AER completion... 00:08:22.606 Failure: test_write_invalid_db 00:08:22.606 00:08:22.606 Executing: test_invalid_db_write_overflow_sq 00:08:22.606 Waiting for AER completion... 00:08:22.606 Failure: test_invalid_db_write_overflow_sq 00:08:22.606 00:08:22.606 Executing: test_invalid_db_write_overflow_cq 00:08:22.606 Waiting for AER completion... 
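Each address from that list is then tested one at a time: doorbell_aers is run against a single PCIe controller under a 10 second timeout, and the Executing / Waiting for AER completion / Failure triplets above appear to be the expected outcome of its invalid-doorbell-write checks. The loop, reduced to the commands already visible in this trace:

    # Per-device loop as traced above; --preserve-status makes timeout report the test
    # binary's own exit status rather than its usual 124 when the 10 s limit expires.
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
            -r "trtype:PCIe traddr:$bdf"
    done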
00:08:22.606 Failure: test_invalid_db_write_overflow_cq 00:08:22.606 00:08:22.606 11:23:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:22.606 11:23:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:22.606 [2024-10-27 11:23:07.831929] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.577 Executing: test_write_invalid_db 00:08:32.577 Waiting for AER completion... 00:08:32.577 Failure: test_write_invalid_db 00:08:32.577 00:08:32.577 Executing: test_invalid_db_write_overflow_sq 00:08:32.577 Waiting for AER completion... 00:08:32.577 Failure: test_invalid_db_write_overflow_sq 00:08:32.577 00:08:32.577 Executing: test_invalid_db_write_overflow_cq 00:08:32.577 Waiting for AER completion... 00:08:32.577 Failure: test_invalid_db_write_overflow_cq 00:08:32.577 00:08:32.577 00:08:32.577 real 0m40.193s 00:08:32.577 user 0m34.207s 00:08:32.577 sys 0m5.629s 00:08:32.577 11:23:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.577 ************************************ 00:08:32.577 11:23:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 END TEST nvme_doorbell_aers 00:08:32.577 ************************************ 00:08:32.577 11:23:17 nvme -- nvme/nvme.sh@97 -- # uname 00:08:32.577 11:23:17 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:32.577 11:23:17 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:32.577 11:23:17 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:32.577 11:23:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.577 11:23:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 ************************************ 00:08:32.577 START TEST nvme_multi_aen 00:08:32.577 ************************************ 00:08:32.577 11:23:17 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:32.835 [2024-10-27 11:23:17.870874] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.871071] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.871086] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.872331] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.872364] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.872374] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.873463] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. 
Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.873522] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.873595] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.874909] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.875050] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 [2024-10-27 11:23:17.875120] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63073) is not found. Dropping the request. 00:08:32.835 Child process pid: 63595 00:08:32.835 [Child] Asynchronous Event Request test 00:08:32.835 [Child] Attached to 0000:00:10.0 00:08:32.835 [Child] Attached to 0000:00:11.0 00:08:32.835 [Child] Attached to 0000:00:13.0 00:08:32.835 [Child] Attached to 0000:00:12.0 00:08:32.835 [Child] Registering asynchronous event callbacks... 00:08:32.835 [Child] Getting orig temperature thresholds of all controllers 00:08:32.835 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:32.835 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.835 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.835 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.835 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.835 [Child] Cleaning up... 00:08:32.835 Asynchronous Event Request test 00:08:32.835 Attached to 0000:00:10.0 00:08:32.835 Attached to 0000:00:11.0 00:08:32.835 Attached to 0000:00:13.0 00:08:32.835 Attached to 0000:00:12.0 00:08:32.835 Reset controller to setup AER completions for this process 00:08:32.835 Registering asynchronous event callbacks... 
00:08:32.835 Getting orig temperature thresholds of all controllers 00:08:32.835 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:32.835 Setting all controllers temperature threshold low to trigger AER 00:08:32.835 Waiting for all controllers temperature threshold to be set lower 00:08:32.835 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:32.835 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:32.835 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:32.835 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:32.835 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:32.835 Waiting for all controllers to trigger AER and reset threshold 00:08:32.835 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.835 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.836 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.836 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:32.836 Cleaning up... 00:08:32.836 00:08:32.836 real 0m0.399s 00:08:32.836 user 0m0.138s 00:08:32.836 sys 0m0.171s 00:08:32.836 11:23:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.836 11:23:18 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 ************************************ 00:08:32.836 END TEST nvme_multi_aen 00:08:32.836 ************************************ 00:08:33.092 11:23:18 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:33.092 11:23:18 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:33.092 11:23:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.092 11:23:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.092 ************************************ 00:08:33.092 START TEST nvme_startup 00:08:33.092 ************************************ 00:08:33.092 11:23:18 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:33.092 Initializing NVMe Controllers 00:08:33.092 Attached to 0000:00:10.0 00:08:33.092 Attached to 0000:00:11.0 00:08:33.092 Attached to 0000:00:13.0 00:08:33.092 Attached to 0000:00:12.0 00:08:33.092 Initialization complete. 00:08:33.092 Time used:142017.109 (us). 
00:08:33.092 00:08:33.092 real 0m0.204s 00:08:33.092 user 0m0.076s 00:08:33.092 sys 0m0.085s 00:08:33.092 ************************************ 00:08:33.092 END TEST nvme_startup 00:08:33.092 ************************************ 00:08:33.092 11:23:18 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.092 11:23:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:33.350 11:23:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:33.350 11:23:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.350 11:23:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.350 11:23:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.350 ************************************ 00:08:33.350 START TEST nvme_multi_secondary 00:08:33.350 ************************************ 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63651 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63652 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:33.350 11:23:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:36.632 Initializing NVMe Controllers 00:08:36.632 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.632 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:36.632 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:36.632 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:36.632 Initialization complete. Launching workers. 
00:08:36.632 ======================================================== 00:08:36.632 Latency(us) 00:08:36.632 Device Information : IOPS MiB/s Average min max 00:08:36.632 PCIE (0000:00:10.0) NSID 1 from core 1: 7809.38 30.51 2047.51 1026.05 6530.66 00:08:36.632 PCIE (0000:00:11.0) NSID 1 from core 1: 7809.38 30.51 2048.48 1000.81 6496.38 00:08:36.632 PCIE (0000:00:13.0) NSID 1 from core 1: 7809.38 30.51 2048.62 1026.51 5929.70 00:08:36.632 PCIE (0000:00:12.0) NSID 1 from core 1: 7814.71 30.53 2047.17 961.28 5484.21 00:08:36.632 PCIE (0000:00:12.0) NSID 2 from core 1: 7809.38 30.51 2048.55 1034.10 5616.96 00:08:36.632 PCIE (0000:00:12.0) NSID 3 from core 1: 7809.38 30.51 2048.52 993.72 5489.14 00:08:36.632 ======================================================== 00:08:36.632 Total : 46861.59 183.05 2048.14 961.28 6530.66 00:08:36.632 00:08:36.632 Initializing NVMe Controllers 00:08:36.632 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.632 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.632 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:36.632 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:36.632 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:36.632 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:36.632 Initialization complete. Launching workers. 00:08:36.632 ======================================================== 00:08:36.632 Latency(us) 00:08:36.632 Device Information : IOPS MiB/s Average min max 00:08:36.632 PCIE (0000:00:10.0) NSID 1 from core 2: 3002.30 11.73 5327.14 807.50 13861.28 00:08:36.632 PCIE (0000:00:11.0) NSID 1 from core 2: 3002.30 11.73 5328.94 1044.28 13380.72 00:08:36.632 PCIE (0000:00:13.0) NSID 1 from core 2: 3002.30 11.73 5328.13 1047.11 14052.44 00:08:36.632 PCIE (0000:00:12.0) NSID 1 from core 2: 3002.30 11.73 5328.96 1042.67 13617.53 00:08:36.632 PCIE (0000:00:12.0) NSID 2 from core 2: 3002.30 11.73 5328.86 1049.28 13707.00 00:08:36.632 PCIE (0000:00:12.0) NSID 3 from core 2: 3002.30 11.73 5328.80 1035.30 14124.43 00:08:36.632 ======================================================== 00:08:36.632 Total : 18013.80 70.37 5328.47 807.50 14124.43 00:08:36.632 00:08:36.632 11:23:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63651 00:08:38.536 Initializing NVMe Controllers 00:08:38.536 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:38.536 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:38.536 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.536 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:38.536 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:38.536 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:38.536 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:38.536 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:38.536 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:38.536 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:38.536 Initialization complete. Launching workers. 
00:08:38.536 ======================================================== 00:08:38.536 Latency(us) 00:08:38.536 Device Information : IOPS MiB/s Average min max 00:08:38.536 PCIE (0000:00:10.0) NSID 1 from core 0: 10898.82 42.57 1466.84 672.83 5739.57 00:08:38.536 PCIE (0000:00:11.0) NSID 1 from core 0: 10898.82 42.57 1467.64 694.80 6422.43 00:08:38.536 PCIE (0000:00:13.0) NSID 1 from core 0: 10898.82 42.57 1467.62 663.67 6059.89 00:08:38.536 PCIE (0000:00:12.0) NSID 1 from core 0: 10898.82 42.57 1467.60 639.42 5991.74 00:08:38.536 PCIE (0000:00:12.0) NSID 2 from core 0: 10898.82 42.57 1467.54 619.51 5839.66 00:08:38.536 PCIE (0000:00:12.0) NSID 3 from core 0: 10902.02 42.59 1467.12 578.38 5647.48 00:08:38.536 ======================================================== 00:08:38.536 Total : 65396.10 255.45 1467.39 578.38 6422.43 00:08:38.536 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63652 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63721 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63722 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:38.536 11:23:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.874 Initializing NVMe Controllers 00:08:41.874 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.874 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:41.874 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:41.874 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:41.874 Initialization complete. Launching workers. 
00:08:41.874 ======================================================== 00:08:41.874 Latency(us) 00:08:41.874 Device Information : IOPS MiB/s Average min max 00:08:41.874 PCIE (0000:00:10.0) NSID 1 from core 0: 7909.35 30.90 2021.60 685.84 6300.92 00:08:41.874 PCIE (0000:00:11.0) NSID 1 from core 0: 7909.35 30.90 2022.52 706.59 6699.47 00:08:41.874 PCIE (0000:00:13.0) NSID 1 from core 0: 7909.35 30.90 2022.57 707.88 6446.53 00:08:41.874 PCIE (0000:00:12.0) NSID 1 from core 0: 7909.35 30.90 2022.55 716.26 5942.73 00:08:41.874 PCIE (0000:00:12.0) NSID 2 from core 0: 7909.35 30.90 2022.72 717.64 6017.14 00:08:41.874 PCIE (0000:00:12.0) NSID 3 from core 0: 7909.35 30.90 2022.74 718.18 5986.48 00:08:41.874 ======================================================== 00:08:41.874 Total : 47456.13 185.38 2022.45 685.84 6699.47 00:08:41.874 00:08:41.874 Initializing NVMe Controllers 00:08:41.874 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:41.874 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:41.874 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:41.874 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:41.874 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:41.874 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:41.874 Initialization complete. Launching workers. 00:08:41.874 ======================================================== 00:08:41.874 Latency(us) 00:08:41.874 Device Information : IOPS MiB/s Average min max 00:08:41.874 PCIE (0000:00:10.0) NSID 1 from core 1: 7738.40 30.23 2066.27 722.46 5612.20 00:08:41.874 PCIE (0000:00:11.0) NSID 1 from core 1: 7738.40 30.23 2067.20 742.04 5636.33 00:08:41.874 PCIE (0000:00:13.0) NSID 1 from core 1: 7738.40 30.23 2067.21 725.66 6078.45 00:08:41.874 PCIE (0000:00:12.0) NSID 1 from core 1: 7738.40 30.23 2067.16 732.98 5576.32 00:08:41.874 PCIE (0000:00:12.0) NSID 2 from core 1: 7738.40 30.23 2067.10 722.40 5490.11 00:08:41.874 PCIE (0000:00:12.0) NSID 3 from core 1: 7738.40 30.23 2067.06 724.81 5883.67 00:08:41.874 ======================================================== 00:08:41.874 Total : 46430.41 181.37 2067.00 722.40 6078.45 00:08:41.874 00:08:43.776 Initializing NVMe Controllers 00:08:43.776 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.776 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.776 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.776 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.776 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:43.776 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:43.776 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:43.776 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:43.776 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:43.776 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:43.776 Initialization complete. Launching workers. 
00:08:43.776 ======================================================== 00:08:43.776 Latency(us) 00:08:43.776 Device Information : IOPS MiB/s Average min max 00:08:43.776 PCIE (0000:00:10.0) NSID 1 from core 2: 4485.99 17.52 3564.61 776.14 13590.75 00:08:43.776 PCIE (0000:00:11.0) NSID 1 from core 2: 4485.99 17.52 3566.04 762.88 16096.12 00:08:43.776 PCIE (0000:00:13.0) NSID 1 from core 2: 4485.99 17.52 3565.79 726.00 15630.35 00:08:43.776 PCIE (0000:00:12.0) NSID 1 from core 2: 4485.99 17.52 3565.54 684.00 13082.11 00:08:43.776 PCIE (0000:00:12.0) NSID 2 from core 2: 4485.99 17.52 3566.02 658.72 13132.62 00:08:43.776 PCIE (0000:00:12.0) NSID 3 from core 2: 4485.99 17.52 3565.76 621.90 13735.54 00:08:43.776 ======================================================== 00:08:43.776 Total : 26915.97 105.14 3565.63 621.90 16096.12 00:08:43.776 00:08:43.776 ************************************ 00:08:43.776 END TEST nvme_multi_secondary 00:08:43.776 ************************************ 00:08:43.776 11:23:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63721 00:08:43.776 11:23:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63722 00:08:43.776 00:08:43.776 real 0m10.633s 00:08:43.776 user 0m18.416s 00:08:43.776 sys 0m0.637s 00:08:43.776 11:23:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.776 11:23:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:44.036 11:23:29 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:44.036 11:23:29 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/62678 ]] 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1090 -- # kill 62678 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1091 -- # wait 62678 00:08:44.036 [2024-10-27 11:23:29.058711] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.058995] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.059040] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.059064] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.062087] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.062154] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.062175] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.062198] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.064494] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 
00:08:44.036 [2024-10-27 11:23:29.064526] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.064536] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.064546] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.065915] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.065952] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.065962] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 [2024-10-27 11:23:29.065973] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63594) is not found. Dropping the request. 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:44.036 11:23:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.036 11:23:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.036 ************************************ 00:08:44.036 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:44.036 ************************************ 00:08:44.036 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:44.036 * Looking for test storage... 
00:08:44.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:44.036 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:44.036 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lcov --version 00:08:44.036 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.295 --rc genhtml_branch_coverage=1 00:08:44.295 --rc genhtml_function_coverage=1 00:08:44.295 --rc genhtml_legend=1 00:08:44.295 --rc geninfo_all_blocks=1 00:08:44.295 --rc geninfo_unexecuted_blocks=1 00:08:44.295 00:08:44.295 ' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.295 --rc genhtml_branch_coverage=1 00:08:44.295 --rc genhtml_function_coverage=1 00:08:44.295 --rc genhtml_legend=1 00:08:44.295 --rc geninfo_all_blocks=1 00:08:44.295 --rc geninfo_unexecuted_blocks=1 00:08:44.295 00:08:44.295 ' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.295 --rc genhtml_branch_coverage=1 00:08:44.295 --rc genhtml_function_coverage=1 00:08:44.295 --rc genhtml_legend=1 00:08:44.295 --rc geninfo_all_blocks=1 00:08:44.295 --rc geninfo_unexecuted_blocks=1 00:08:44.295 00:08:44.295 ' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:44.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.295 --rc genhtml_branch_coverage=1 00:08:44.295 --rc genhtml_function_coverage=1 00:08:44.295 --rc genhtml_legend=1 00:08:44.295 --rc geninfo_all_blocks=1 00:08:44.295 --rc geninfo_unexecuted_blocks=1 00:08:44.295 00:08:44.295 ' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:44.295 
11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # bdfs=() 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # local bdfs 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # bdfs=() 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # local bdfs 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:44.295 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:08:44.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63877 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63877 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 63877 ']' 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.296 11:23:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:44.296 [2024-10-27 11:23:29.491588] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:08:44.296 [2024-10-27 11:23:29.491850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63877 ] 00:08:44.559 [2024-10-27 11:23:29.660371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.559 [2024-10-27 11:23:29.760051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.559 [2024-10-27 11:23:29.760408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.559 [2024-10-27 11:23:29.760798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.559 [2024-10-27 11:23:29.760883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.129 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.129 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:45.129 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:45.129 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.129 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 nvme0n1 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ROC28.txt 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.388 true 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730028210 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63900 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:45.388 11:23:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.288 [2024-10-27 11:23:32.441780] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:47.288 [2024-10-27 11:23:32.442010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:47.288 [2024-10-27 11:23:32.442032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:47.288 [2024-10-27 11:23:32.442047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.288 [2024-10-27 11:23:32.443753] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:47.288 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63900 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63900 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63900 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ROC28.txt 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ROC28.txt 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63877 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 63877 ']' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 63877 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63877 00:08:47.288 killing process with pid 63877 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63877' 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 63877 00:08:47.288 11:23:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 63877 00:08:48.661 11:23:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:48.661 11:23:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:48.661 00:08:48.661 real 0m4.513s 00:08:48.661 user 0m15.950s 00:08:48.661 sys 0m0.504s 00:08:48.661 ************************************ 00:08:48.661 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:48.661 
************************************ 00:08:48.661 11:23:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.661 11:23:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.661 11:23:33 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:48.661 11:23:33 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:48.661 11:23:33 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.661 11:23:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.661 11:23:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.661 ************************************ 00:08:48.661 START TEST nvme_fio 00:08:48.661 ************************************ 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:48.661 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:48.661 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:48.661 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # bdfs=() 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # local bdfs 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:08:48.661 11:23:33 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:48.661 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:48.661 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:48.662 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:48.662 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:48.662 11:23:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:48.919 11:23:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:48.919 11:23:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.178 11:23:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.178 11:23:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1340 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.178 11:23:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:49.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.178 fio-3.35 00:08:49.178 Starting 1 thread 00:08:55.734 00:08:55.734 test: (groupid=0, jobs=1): err= 0: pid=64036: Sun Oct 27 11:23:40 2024 00:08:55.734 read: IOPS=21.3k, BW=83.2MiB/s (87.3MB/s)(167MiB/2002msec) 00:08:55.734 slat (usec): min=3, max=419, avg= 5.16, stdev= 3.31 00:08:55.734 clat (usec): min=557, max=8573, avg=2997.35, stdev=983.52 00:08:55.734 lat (usec): min=560, max=8585, avg=3002.51, stdev=984.65 00:08:55.734 clat percentiles (usec): 00:08:55.734 | 1.00th=[ 1860], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:55.734 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2769], 00:08:55.734 | 70.00th=[ 2966], 80.00th=[ 3359], 90.00th=[ 4490], 95.00th=[ 5342], 00:08:55.734 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 8029], 99.95th=[ 8291], 00:08:55.734 | 99.99th=[ 8455] 00:08:55.734 bw ( KiB/s): min=81944, max=90272, per=100.00%, avg=85728.00, stdev=4215.70, samples=3 00:08:55.734 iops : min=20486, max=22568, avg=21432.00, stdev=1053.92, samples=3 00:08:55.734 write: IOPS=21.1k, BW=82.6MiB/s (86.6MB/s)(165MiB/2002msec); 0 zone resets 00:08:55.734 slat (usec): min=3, max=325, avg= 5.31, stdev= 3.23 00:08:55.734 clat (usec): min=580, max=8884, avg=3008.75, stdev=983.37 00:08:55.734 lat (usec): min=584, max=8898, avg=3014.06, stdev=984.51 00:08:55.734 clat percentiles (usec): 00:08:55.734 | 1.00th=[ 1844], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2376], 00:08:55.734 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2802], 00:08:55.734 | 70.00th=[ 2966], 80.00th=[ 3359], 90.00th=[ 4490], 95.00th=[ 5342], 00:08:55.734 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 7898], 99.95th=[ 8225], 00:08:55.734 | 99.99th=[ 8455] 00:08:55.734 bw ( KiB/s): min=82232, max=89664, per=100.00%, avg=85882.67, stdev=3717.72, samples=3 00:08:55.734 iops : min=20558, max=22416, avg=21470.67, stdev=929.43, samples=3 00:08:55.734 lat (usec) : 750=0.01%, 1000=0.01% 00:08:55.734 lat (msec) : 2=1.64%, 4=85.25%, 10=13.09% 00:08:55.734 cpu : usr=98.60%, sys=0.25%, ctx=29, majf=0, minf=607 00:08:55.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.9% 00:08:55.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.734 issued rwts: total=42647,42343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.734 00:08:55.734 Run status group 0 (all jobs): 00:08:55.734 READ: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2002-2002msec 00:08:55.734 WRITE: bw=82.6MiB/s (86.6MB/s), 82.6MiB/s-82.6MiB/s (86.6MB/s-86.6MB/s), io=165MiB (173MB), run=2002-2002msec 00:08:55.734 ----------------------------------------------------- 00:08:55.734 Suppressions used: 00:08:55.734 count bytes template 00:08:55.734 1 32 /usr/src/fio/parse.c 00:08:55.734 1 8 libtcmalloc_minimal.so 00:08:55.734 ----------------------------------------------------- 00:08:55.734 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:55.734 11:23:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:55.734 11:23:40 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:55.991 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.991 fio-3.35 00:08:55.991 Starting 1 thread 00:09:02.550 00:09:02.550 test: (groupid=0, jobs=1): err= 0: pid=64094: Sun Oct 27 11:23:46 2024 00:09:02.550 read: IOPS=18.7k, BW=73.1MiB/s (76.6MB/s)(146MiB/2001msec) 00:09:02.550 slat (usec): min=3, max=144, avg= 5.55, stdev= 2.94 00:09:02.550 clat (usec): min=236, max=9962, avg=3396.54, stdev=1171.94 00:09:02.550 lat (usec): min=240, max=10016, avg=3402.09, stdev=1173.11 00:09:02.550 clat percentiles (usec): 00:09:02.550 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:02.550 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3163], 00:09:02.550 | 70.00th=[ 3687], 80.00th=[ 4424], 90.00th=[ 5145], 95.00th=[ 5800], 00:09:02.550 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 8848], 00:09:02.550 | 99.99th=[ 9896] 00:09:02.550 bw ( KiB/s): min=71992, max=77624, per=99.11%, avg=74154.67, stdev=3034.86, samples=3 00:09:02.550 iops : min=17998, max=19406, avg=18538.67, stdev=758.72, samples=3 00:09:02.550 write: IOPS=18.7k, BW=73.1MiB/s (76.6MB/s)(146MiB/2001msec); 0 zone resets 00:09:02.550 slat (usec): min=3, max=137, avg= 5.69, stdev= 2.95 00:09:02.550 clat (usec): min=298, max=9886, avg=3418.53, stdev=1178.56 00:09:02.550 lat (usec): min=303, max=9905, avg=3424.22, stdev=1179.76 00:09:02.550 clat percentiles (usec): 00:09:02.550 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:09:02.550 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3163], 00:09:02.550 | 70.00th=[ 3687], 80.00th=[ 4424], 90.00th=[ 5211], 95.00th=[ 5866], 00:09:02.550 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8291], 99.95th=[ 8717], 00:09:02.550 | 99.99th=[ 9503] 00:09:02.550 bw ( KiB/s): min=72040, max=77600, per=99.14%, avg=74192.00, stdev=2985.22, samples=3 00:09:02.550 iops : min=18010, max=19400, avg=18548.00, stdev=746.31, samples=3 00:09:02.550 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:02.550 lat (msec) : 2=0.89%, 4=73.16%, 10=25.91% 00:09:02.550 cpu : usr=98.85%, sys=0.10%, ctx=4, majf=0, minf=607 00:09:02.550 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:02.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:02.550 issued rwts: total=37428,37436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:02.550 00:09:02.550 Run status group 0 (all jobs): 00:09:02.550 READ: bw=73.1MiB/s (76.6MB/s), 73.1MiB/s-73.1MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:02.550 WRITE: bw=73.1MiB/s (76.6MB/s), 73.1MiB/s-73.1MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:02.550 ----------------------------------------------------- 00:09:02.550 Suppressions used: 00:09:02.550 count bytes template 00:09:02.550 1 32 /usr/src/fio/parse.c 00:09:02.550 1 8 libtcmalloc_minimal.so 00:09:02.550 ----------------------------------------------------- 00:09:02.550 00:09:02.550 11:23:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:02.550 11:23:46 
nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:02.550 11:23:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:02.550 11:23:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.550 11:23:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:02.550 11:23:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:02.550 11:23:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:02.550 11:23:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:02.550 11:23:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:02.550 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:02.550 fio-3.35 00:09:02.550 Starting 1 thread 00:09:09.114 00:09:09.114 test: (groupid=0, jobs=1): err= 0: pid=64155: Sun Oct 27 11:23:53 2024 00:09:09.114 read: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec) 00:09:09.114 slat (nsec): min=4219, max=79191, avg=5524.08, stdev=2851.46 00:09:09.114 clat (usec): min=428, max=11640, avg=3403.57, stdev=1176.45 00:09:09.114 lat (usec): min=433, max=11661, avg=3409.09, stdev=1177.54 00:09:09.114 clat percentiles (usec): 00:09:09.114 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2540], 00:09:09.114 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3130], 00:09:09.114 | 70.00th=[ 3556], 
80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5866], 00:09:09.114 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[ 8717], 00:09:09.114 | 99.99th=[11469] 00:09:09.114 bw ( KiB/s): min=68536, max=77676, per=99.67%, avg=74606.67, stdev=5257.46, samples=3 00:09:09.114 iops : min=17134, max=19419, avg=18651.67, stdev=1314.37, samples=3 00:09:09.114 write: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec); 0 zone resets 00:09:09.114 slat (nsec): min=4278, max=72386, avg=5646.65, stdev=2884.84 00:09:09.114 clat (usec): min=378, max=11568, avg=3415.99, stdev=1172.57 00:09:09.114 lat (usec): min=383, max=11577, avg=3421.63, stdev=1173.67 00:09:09.114 clat percentiles (usec): 00:09:09.114 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:09.114 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2966], 60.00th=[ 3163], 00:09:09.114 | 70.00th=[ 3589], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5866], 00:09:09.114 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 8848], 00:09:09.114 | 99.99th=[11076] 00:09:09.114 bw ( KiB/s): min=68616, max=77996, per=99.75%, avg=74684.00, stdev=5262.39, samples=3 00:09:09.114 iops : min=17154, max=19499, avg=18671.00, stdev=1315.60, samples=3 00:09:09.114 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:09.114 lat (msec) : 2=1.15%, 4=73.85%, 10=24.93%, 20=0.03% 00:09:09.114 cpu : usr=98.90%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:09.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.114 issued rwts: total=37446,37454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.114 00:09:09.114 Run status group 0 (all jobs): 00:09:09.114 READ: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:09.114 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:09.114 ----------------------------------------------------- 00:09:09.114 Suppressions used: 00:09:09.114 count bytes template 00:09:09.114 1 32 /usr/src/fio/parse.c 00:09:09.114 1 8 libtcmalloc_minimal.so 00:09:09.114 ----------------------------------------------------- 00:09:09.114 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:09.114 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.377 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.377 11:23:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.377 11:23:54 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:09.377 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.377 fio-3.35 00:09:09.377 Starting 1 thread 00:09:19.379 00:09:19.379 test: (groupid=0, jobs=1): err= 0: pid=64219: Sun Oct 27 11:24:03 2024 00:09:19.379 read: IOPS=20.5k, BW=80.0MiB/s (83.8MB/s)(160MiB/2001msec) 00:09:19.379 slat (usec): min=3, max=512, avg= 5.30, stdev= 4.11 00:09:19.379 clat (usec): min=280, max=11631, avg=3117.21, stdev=1152.85 00:09:19.379 lat (usec): min=285, max=11673, avg=3122.52, stdev=1154.21 00:09:19.379 clat percentiles (usec): 00:09:19.379 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:19.379 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:19.379 | 70.00th=[ 2999], 80.00th=[ 3490], 90.00th=[ 4817], 95.00th=[ 5932], 00:09:19.379 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 9503], 99.95th=[10421], 00:09:19.379 | 99.99th=[11600] 00:09:19.379 bw ( KiB/s): min=76672, max=86664, per=100.00%, avg=81973.33, stdev=5023.91, samples=3 00:09:19.379 iops : min=19168, max=21666, avg=20493.33, stdev=1255.98, samples=3 00:09:19.379 write: IOPS=20.4k, BW=79.8MiB/s (83.6MB/s)(160MiB/2001msec); 0 zone resets 00:09:19.379 slat (usec): min=3, max=332, avg= 5.40, stdev= 3.17 00:09:19.379 clat (usec): min=215, max=11571, avg=3122.19, stdev=1140.40 00:09:19.379 lat (usec): min=219, max=11586, avg=3127.60, stdev=1141.63 00:09:19.379 clat percentiles (usec): 00:09:19.379 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:19.379 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:19.379 | 70.00th=[ 3032], 80.00th=[ 3458], 90.00th=[ 4817], 95.00th=[ 5866], 00:09:19.379 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 9503], 99.95th=[10421], 00:09:19.379 | 
99.99th=[11469] 00:09:19.379 bw ( KiB/s): min=76536, max=86960, per=100.00%, avg=82037.33, stdev=5236.04, samples=3 00:09:19.379 iops : min=19134, max=21740, avg=20509.33, stdev=1309.01, samples=3 00:09:19.379 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:19.379 lat (msec) : 2=1.25%, 4=83.24%, 10=15.40%, 20=0.07% 00:09:19.380 cpu : usr=98.55%, sys=0.25%, ctx=17, majf=0, minf=606 00:09:19.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:19.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.380 issued rwts: total=40957,40859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.380 00:09:19.380 Run status group 0 (all jobs): 00:09:19.380 READ: bw=80.0MiB/s (83.8MB/s), 80.0MiB/s-80.0MiB/s (83.8MB/s-83.8MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:19.380 WRITE: bw=79.8MiB/s (83.6MB/s), 79.8MiB/s-79.8MiB/s (83.6MB/s-83.6MB/s), io=160MiB (167MB), run=2001-2001msec 00:09:19.380 ----------------------------------------------------- 00:09:19.380 Suppressions used: 00:09:19.380 count bytes template 00:09:19.380 1 32 /usr/src/fio/parse.c 00:09:19.380 1 8 libtcmalloc_minimal.so 00:09:19.380 ----------------------------------------------------- 00:09:19.380 00:09:19.380 11:24:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:19.380 11:24:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:19.380 00:09:19.380 real 0m29.967s 00:09:19.380 user 0m16.379s 00:09:19.380 sys 0m25.143s 00:09:19.380 11:24:03 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.380 ************************************ 00:09:19.380 11:24:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:19.380 END TEST nvme_fio 00:09:19.380 ************************************ 00:09:19.380 00:09:19.380 real 1m38.646s 00:09:19.380 user 3m35.990s 00:09:19.380 sys 0m35.411s 00:09:19.380 11:24:03 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.380 ************************************ 00:09:19.380 END TEST nvme 00:09:19.380 11:24:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.380 ************************************ 00:09:19.380 11:24:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:19.380 11:24:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:19.380 11:24:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.380 11:24:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.380 11:24:03 -- common/autotest_common.sh@10 -- # set +x 00:09:19.380 ************************************ 00:09:19.380 START TEST nvme_scc 00:09:19.380 ************************************ 00:09:19.380 11:24:03 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:19.380 * Looking for test storage... 
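For readers following the traces above: each fio pass is driven by the fio_plugin helper in autotest_common.sh, which looks up the ASAN runtime that the SPDK fio engine links against and preloads it ahead of the engine before invoking fio. A condensed sketch of that preload logic (paths and the PCIe traddr below are illustrative, taken from this run):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # Find the ASAN runtime the plugin is linked against (empty if the build is not sanitized).
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt.asan' | awk '{print $3}' | head -n1)
    # Preload ASAN first, then the SPDK ioengine, so the sanitizer interceptors resolve before the plugin loads.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

This is why every fio line in the log is preceded by the ldd/grep/awk triplet and an LD_PRELOAD assignment.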
00:09:19.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:19.380 11:24:03 nvme_scc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:19.380 11:24:03 nvme_scc -- common/autotest_common.sh@1689 -- # lcov --version 00:09:19.380 11:24:03 nvme_scc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:19.380 11:24:03 nvme_scc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.380 11:24:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:19.380 11:24:04 nvme_scc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.380 11:24:04 nvme_scc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:19.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.380 --rc genhtml_branch_coverage=1 00:09:19.380 --rc genhtml_function_coverage=1 00:09:19.380 --rc genhtml_legend=1 00:09:19.380 --rc geninfo_all_blocks=1 00:09:19.380 --rc geninfo_unexecuted_blocks=1 00:09:19.380 00:09:19.380 ' 00:09:19.380 11:24:04 nvme_scc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:19.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.380 --rc genhtml_branch_coverage=1 00:09:19.380 --rc genhtml_function_coverage=1 00:09:19.380 --rc genhtml_legend=1 00:09:19.380 --rc geninfo_all_blocks=1 00:09:19.380 --rc geninfo_unexecuted_blocks=1 00:09:19.380 00:09:19.380 ' 00:09:19.380 11:24:04 nvme_scc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:19.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.380 --rc genhtml_branch_coverage=1 00:09:19.380 --rc genhtml_function_coverage=1 00:09:19.380 --rc genhtml_legend=1 00:09:19.380 --rc geninfo_all_blocks=1 00:09:19.380 --rc geninfo_unexecuted_blocks=1 00:09:19.380 00:09:19.380 ' 00:09:19.380 11:24:04 nvme_scc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:19.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.380 --rc genhtml_branch_coverage=1 00:09:19.380 --rc genhtml_function_coverage=1 00:09:19.380 --rc genhtml_legend=1 00:09:19.380 --rc geninfo_all_blocks=1 00:09:19.380 --rc geninfo_unexecuted_blocks=1 00:09:19.380 00:09:19.380 ' 00:09:19.380 11:24:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.380 11:24:04 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.380 11:24:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.380 11:24:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.380 11:24:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.380 11:24:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:19.380 11:24:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
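The lcov version gate traced above (lt 1.15 2) comes from cmp_versions in scripts/common.sh, which splits both version strings on '.', '-' and ':' and compares the components numerically. A minimal sketch of the same idea, assuming purely numeric components (the real helper also normalizes non-numeric parts):

    version_lt() {                      # returns 0 if $1 < $2
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                        # equal, so not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2, use legacy --rc lcov_* option names"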
00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:19.380 11:24:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:19.380 11:24:04 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.380 11:24:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:19.380 11:24:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:19.380 11:24:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:19.380 11:24:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:19.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.380 Waiting for block devices as requested 00:09:19.380 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.380 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.641 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:19.641 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.947 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.947 11:24:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:24.947 11:24:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.947 11:24:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.947 11:24:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.947 11:24:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.947 11:24:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.947 11:24:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.947 11:24:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.947 11:24:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.947 11:24:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
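The controller scan that follows builds one bash associative array per device (nvme0, nvme0n1, ...) by reading the id-ctrl/id-ns output line by line on an IFS=':' split, which is what the nvme_get trace below is doing field by field. A stripped-down sketch of that parse (array and device names are illustrative; the real helper also handles continuation lines):

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}                    # e.g. "vid", "oacs", "oncs"
        val="${val#"${val%%[![:space:]]*}"}"        # trim leading whitespace, keep trailing padding
        nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} oncs=${nvme0[oncs]}"

Later test steps then key off individual fields of these arrays instead of re-running identify.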
00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.948 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.949 11:24:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.949 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.950 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.951 11:24:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.951 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
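The trace above shows how nvme/functions.sh builds a bash associative array (nvme0n1) out of "nvme id-ns /dev/nvme0n1" output: each "field : value" line is split with IFS=: and "read -r reg val", and the pair is stored via eval (functions.sh@21-@23). A minimal sketch of that pattern, assuming nvme-cli is installed and simplifying the whitespace handling of the real script:

    # Sketch: fold "field : value" lines from nvme-cli into an associative array,
    # roughly what the nvme_get loop traced above does for nvme0n1.
    declare -A ns_info=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue     # skip header/blank lines
        reg=${reg//[[:space:]]/}                 # "lbaf  4 " -> "lbaf4"
        ns_info[$reg]=${val# }                   # keep the value, minus its leading space
    done < <(nvme id-ns /dev/nvme0n1)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

The real loop stores into a named global array (here nvme0n1) through eval, so the same helper can serve every controller and namespace it visits.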
00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
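Earlier in this pass (functions.sh@53-@57) the script binds a nameref, local -n _ctrl_ns=nvme0_ns, so the per-controller namespace map can be filled generically; a little further on (functions.sh@58) the namespace is recorded under its numeric index. A compressed sketch of that pattern, where fill_ns_map is a hypothetical helper introduced here only for illustration:

    # Sketch of the nameref pattern traced at functions.sh@53-@58.
    declare -gA nvme0_ns=()
    fill_ns_map() {                         # hypothetical helper, illustration only
        local -n _ctrl_ns=$1                # nameref to the per-controller map, e.g. nvme0_ns
        local ns_path=$2                    # e.g. /sys/class/nvme/nvme0/nvme0n1
        local ns_dev=${ns_path##*/}         # -> nvme0n1 (functions.sh@56)
        _ctrl_ns[${ns_dev##*n}]=$ns_dev     # index 1 -> nvme0n1, as at functions.sh@58
    }
    fill_ns_map nvme0_ns /sys/class/nvme/nvme0/nvme0n1
    echo "${nvme0_ns[1]}"                   # -> nvme0n1

This keeps nvme0's namespaces in nvme0_ns and nvme1's in nvme1_ns while the loop body itself stays identical.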
00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.952 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.953 11:24:09 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.953 11:24:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.953 11:24:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.953 11:24:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.953 11:24:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.953 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.954 11:24:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.954 
11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
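For the first namespace traced above, the in-use block size can be read straight out of the parsed fields: flbas is 0x4, the lbaf4 descriptor reports lbads:12 (2^12 = 4096-byte blocks, marked "in use"), and nsze is 0x140000 blocks, so the namespace holds 0x140000 * 4096 = 5,368,709,120 bytes = 5 GiB. A small sketch of that arithmetic, assuming the nvme0n1 array populated by the loop traced earlier; the snippet is illustrative and not part of functions.sh:

    # Sketch: derive the in-use block size and capacity from the parsed id-ns fields.
    flbas=$(( ${nvme0n1[flbas]} & 0xf ))             # low nibble selects the LBA format -> 4
    lbads=$(grep -o 'lbads:[0-9]*' <<< "${nvme0n1[lbaf$flbas]}" | cut -d: -f2)
    block_size=$(( 1 << lbads ))                     # 2^12 = 4096 bytes
    capacity=$(( ${nvme0n1[nsze]} * block_size ))    # 0x140000 * 4096 = 5 GiB
    echo "block=$block_size B, capacity=$capacity B"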
00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.954 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.955 11:24:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.955 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.956 11:24:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:24.956 11:24:10 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.956 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
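Around the hand-off from nvme0 to nvme1 the trace also shows the outer discovery loop (functions.sh@47-@63): each /sys/class/nvme/nvme* entry is resolved to a PCI address (0000:00:11.0, then 0000:00:10.0 here), filtered through pci_can_use, identified with nvme id-ctrl, and finally registered in the ctrls, nvmes, bdfs and ordered_ctrls maps. A compressed sketch of that flow, assuming nvme/functions.sh and scripts/common.sh are sourced so nvme_get and pci_can_use are available; the readlink step is an assumption about where the BDF comes from, since the trace only shows the resulting value:

    # Sketch of the controller scan traced at functions.sh@47-@63.
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: BDF taken from the device symlink
        pci_can_use "$pci" || continue                    # skip controllers not allowed for this run
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills e.g. the nvme1 array as traced above
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of that controller's namespace map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done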
00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.957 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.958 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.959 
11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:24.959 11:24:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.959 11:24:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.959 11:24:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.959 11:24:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.959 11:24:10 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.959 11:24:10 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.959 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:24.960 11:24:10 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:24.960 11:24:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
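The xtrace around this point is nvme/functions.sh@16-23 filling the nvme2 associative array from "nvme id-ctrl /dev/nvme2" output (the same pattern already ran for nvme1 and nvme1n1 above). A condensed sketch of that pattern follows; it is a simplification for readability, not the verbatim functions.sh source, and it assumes an nvme-cli binary on PATH where the CI invokes /usr/local/src/nvme-cli/nvme directly:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # header/blank lines carry no value
            reg=${reg//[[:space:]]/}          # "vid   " -> "vid"
            val=${val# }                      # drop the single space after ':'
            eval "${ref}[${reg}]=\"${val}\""  # nvme2[vid]="0x1b36", nvme2[sn]="12342 ", ...
        done < <(nvme "$@")                   # the CI runs /usr/local/src/nvme-cli/nvme here
    }

It is called as "nvme_get nvme2 id-ctrl /dev/nvme2" for controllers and with id-ns for the per-namespace arrays, which matches the invocations echoed at functions.sh@52 and @57 in this log.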
00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:24.960 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:24.961 11:24:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:24.961 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
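Around each identify dump the log also echoes nvme/functions.sh@47-63 (visible just after the nvme1n1 block above and again after nvme2n1 further down): every controller that passes the pci_can_use check gets an id-ctrl array, one id-ns array per namespace, and entries in the ctrls/nvmes/bdfs/ordered_ctrls maps. A hedged sketch of that bookkeeping is below; the wrapper name and anything not echoed in the trace (such as the pci lookup) are assumptions for illustration, and pci_can_use is taken as given from scripts/common.sh:

    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls

    scan_nvme_ctrls() {                                      # hypothetical wrapper name
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:12.0
            pci_can_use "$pci" || continue                   # allow/block-list check
            ctrl_dev=${ctrl##*/}                             # nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -gA "${ctrl_dev}_ns=()"                    # assumed: per-controller namespace map
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do              # /sys/class/nvme/nvme2/nvme2n1, n2, ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns_dev##*n}]=$ns_dev
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
            unset -n _ctrl_ns
        done
    }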
00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:24.962 
11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:24.962 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
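The id-ns values just recorded for nvme2n1 (flbas=0x4 and nlbaf=7, with the matching lbaf4 entry "ms:0 lbads:12 rp:0 (in use)" echoed a few entries further down in this dump) identify the in-use LBA format. A small hedged example of turning those array fields into a block size, using the arrays built by the nvme_get pattern sketched earlier; flbas bits 3:0 index the lbafN entry, and lbads is log2 of the data block size:

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))                          # 0x4 -> 4
    lbaf=${nvme2n1[lbaf$fmt]}                                   # "ms:0 lbads:12 rp:0 (in use)"
    lbads=$(sed -n 's/.*lbads:\([0-9]\+\).*/\1/p' <<< "$lbaf")
    echo "nvme2n1 LBA format $fmt: $((1 << lbads))-byte blocks" # lbads:12 -> 4096-byte blocks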
00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.963 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.964 11:24:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:24.964 11:24:10 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.964 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:24.965 11:24:10 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.965 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
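The records above show the pattern the test's functions.sh uses to cache identify data: each "field : value" line printed by nvme-cli's id-ns command is read with IFS=:, split into a register name and a value, and eval'd into a bash associative array keyed by that name (nsze, ncap, flbas, lbaf0..7, ...). A minimal standalone sketch of the same idea, assuming the plain-text nvme-cli output format and root access; the names here are illustrative, not the exact functions.sh implementation:

  declare -A ns_info
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}            # "lbaf  4 " -> "lbaf4", matching the keys seen in the trace
      [[ -n $reg && -n $val ]] || continue
      ns_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme2n2)
  echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"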
00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
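The flbas value captured for this namespace (0x4) selects one of the lbaf0..7 descriptors that follow: bits 3:0 of FLBAS give the index of the in-use LBA format, and that descriptor's lbads field is the log2 of the logical block size. A small sketch of the lookup, using the values from this trace (the bit layout follows the NVMe base spec; the snippet is illustrative only):

  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'      # copied from the identify output above
  fmt=$(( flbas & 0xf ))                   # bits 3:0 pick the active format -> 4
  ref="lbaf$fmt"
  lbads=$(grep -o 'lbads:[0-9]*' <<< "${!ref}" | cut -d: -f2)
  echo "LBA format $fmt uses $(( 1 << lbads ))-byte blocks"   # -> 4096

With nsze = 0x100000 (1,048,576) blocks of 4096 bytes, the namespace works out to 4 GiB.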
00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 
11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.966 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 
11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:24.967 11:24:10 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.967 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.968 
11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:24.968 11:24:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.968 11:24:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:24.968 11:24:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.968 11:24:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.968 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:24.969 11:24:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
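Two of the nvme3 id-ctrl fields just recorded are packed values: ver encodes the NVMe spec version as major/minor/tertiary bytes, and mdts gives the maximum data transfer size as a power of two in units of the controller's minimum page size. A quick decode of the values above (the 4 KiB page size is an assumption; CAP.MPSMIN is not part of this trace):

  ver=0x10400; mdts=7
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))   # NVMe 1.4.0
  echo "max transfer: $(( (1 << mdts) * 4 )) KiB"                                        # 512 KiB if MPSMIN is 4 KiB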
00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.969 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
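For scripting outside this harness, the same identify data does not have to be scraped from the plain-text output: recent nvme-cli builds can emit JSON directly. A possible one-liner, assuming jq is available on the test VM:

  nvme id-ctrl /dev/nvme3 -o json | jq '{vid, sn, mn, fr, mdts, oacs, oncs}'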
00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.233 11:24:10 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 
11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:25.234 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
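[editorial note] The trace above is nvme/functions.sh filling the nvme3 associative array from nvme-cli "id-ctrl" output, one "reg : value" line at a time, via IFS=:, read -r reg val, and an eval per register. A minimal sketch of that parsing pattern is below; the array name, device path, and trimming helpers are illustrative, not the exact functions.sh code.

  declare -A ctrl_regs
  while IFS=: read -r reg val; do
      # keys arrive like "oncs   ", values like " 0x15d"; trim both, skip blanks
      reg=${reg//[[:space:]]/}
      val="${val#"${val%%[![:space:]]*}"}"
      [[ -n $reg && -n $val ]] || continue
      ctrl_regs[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3)
  echo "oncs=${ctrl_regs[oncs]}"

Storing everything in an associative array keyed by register name is what lets the later feature checks (oncs, sqes, cqes, subnqn, ...) read values back without re-issuing the admin command.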
00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.235 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.236 11:24:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:25.236 
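[editorial note] The get_ctrls_with_feature scc pass above resolves each controller's ONCS value through a bash nameref and tests bit 8 (the Copy command bit) to decide whether the controller offers Simple Copy. A condensed sketch of that check follows, using the 0x15d value visible in the trace; get_reg is an illustrative stand-in for get_nvme_ctrl_feature, not the real helper.

  # Resolve the controller's register array through a nameref, then test ONCS bit 8.
  get_reg() { local -n _ctrl=$1; echo "${_ctrl[$2]}"; }
  declare -A nvme1=([oncs]=0x15d)      # as populated by the id-ctrl parse above
  oncs=$(get_reg nvme1 oncs)
  if (( oncs & (1 << 8) )); then       # ONCS bit 8: Copy command supported
      echo "nvme1 supports Simple Copy (SCC)"
  fi

With 0x15d, bit 8 (0x100) is set, which is why all four controllers pass the check and nvme1 is chosen as the test target.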
11:24:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:25.236 11:24:10 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:25.237 11:24:10 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:25.237 11:24:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:25.237 11:24:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:25.237 11:24:10 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.498 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.070 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.070 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:26.332 11:24:11 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:26.332 11:24:11 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.332 11:24:11 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.332 11:24:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:26.332 ************************************ 00:09:26.332 START TEST nvme_simple_copy 00:09:26.332 ************************************ 00:09:26.332 11:24:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:26.593 Initializing NVMe Controllers 00:09:26.593 Attaching to 0000:00:10.0 00:09:26.593 Controller supports SCC. Attached to 0000:00:10.0 00:09:26.593 Namespace ID: 1 size: 6GB 00:09:26.593 Initialization complete. 00:09:26.593 00:09:26.594 Controller QEMU NVMe Ctrl (12340 ) 00:09:26.594 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:26.594 Namespace Block Size:4096 00:09:26.594 Writing LBAs 0 to 63 with Random Data 00:09:26.594 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:26.594 LBAs matching Written Data: 64 00:09:26.594 00:09:26.594 real 0m0.306s 00:09:26.594 user 0m0.126s 00:09:26.594 sys 0m0.077s 00:09:26.594 11:24:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.594 11:24:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:26.594 ************************************ 00:09:26.594 END TEST nvme_simple_copy 00:09:26.594 ************************************ 00:09:26.594 00:09:26.594 real 0m7.883s 00:09:26.594 user 0m1.131s 00:09:26.594 sys 0m1.427s 00:09:26.594 11:24:11 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.594 ************************************ 00:09:26.594 END TEST nvme_scc 00:09:26.594 ************************************ 00:09:26.594 11:24:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:26.594 11:24:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:26.594 11:24:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:26.594 11:24:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:26.594 11:24:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:26.594 11:24:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:26.594 11:24:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.594 11:24:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.594 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:09:26.594 ************************************ 00:09:26.594 START TEST nvme_fdp 00:09:26.594 ************************************ 00:09:26.594 11:24:11 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:26.594 * Looking for test storage... 
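[editorial note] The START TEST / END TEST banners and the real/user/sys summary above come from the run_test wrapper in autotest_common.sh. The sketch below is a hypothetical reduction of that pattern (banner, timed run, banner), shown only to make the log structure easier to read; it is not the actual wrapper.

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                      # produces the real/user/sys lines seen in the log
      local rc=$?
      echo "END TEST $name"
      return "$rc"
  }
  run_test_sketch nvme_simple_copy \
      /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'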
00:09:26.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.594 11:24:11 nvme_fdp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:26.594 11:24:11 nvme_fdp -- common/autotest_common.sh@1689 -- # lcov --version 00:09:26.594 11:24:11 nvme_fdp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.855 --rc genhtml_branch_coverage=1 00:09:26.855 --rc genhtml_function_coverage=1 00:09:26.855 --rc genhtml_legend=1 00:09:26.855 --rc geninfo_all_blocks=1 00:09:26.855 --rc geninfo_unexecuted_blocks=1 00:09:26.855 00:09:26.855 ' 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.855 --rc genhtml_branch_coverage=1 00:09:26.855 --rc genhtml_function_coverage=1 00:09:26.855 --rc genhtml_legend=1 00:09:26.855 --rc geninfo_all_blocks=1 00:09:26.855 --rc geninfo_unexecuted_blocks=1 00:09:26.855 00:09:26.855 ' 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.855 --rc genhtml_branch_coverage=1 00:09:26.855 --rc genhtml_function_coverage=1 00:09:26.855 --rc genhtml_legend=1 00:09:26.855 --rc geninfo_all_blocks=1 00:09:26.855 --rc geninfo_unexecuted_blocks=1 00:09:26.855 00:09:26.855 ' 00:09:26.855 11:24:11 nvme_fdp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.855 --rc genhtml_branch_coverage=1 00:09:26.855 --rc genhtml_function_coverage=1 00:09:26.855 --rc genhtml_legend=1 00:09:26.855 --rc geninfo_all_blocks=1 00:09:26.855 --rc geninfo_unexecuted_blocks=1 00:09:26.855 00:09:26.855 ' 00:09:26.855 11:24:11 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.855 11:24:11 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.855 11:24:11 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.855 11:24:11 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.855 11:24:11 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.855 11:24:11 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:26.855 11:24:11 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
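[editorial note] The lcov gate earlier in this trace (scripts/common.sh splitting "1.15" and "2" on IFS=.-: and comparing fields) decides which --rc coverage options get exported. A minimal field-by-field comparison in the same spirit is sketched below, assuming purely numeric version components; it is not the cmp_versions source.

  version_lt() {
      # split each version on dots and compare numerically, field by field
      local -a a b
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1     # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is pre-2.0: use the legacy --rc option spelling"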
00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:26.855 11:24:11 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:26.855 11:24:11 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.855 11:24:11 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:27.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.379 Waiting for block devices as requested 00:09:27.379 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.379 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.641 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.954 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.954 11:24:17 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.954 11:24:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.954 11:24:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.954 11:24:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.954 11:24:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
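[editorial note] scan_nvme_ctrls, entered just above for the FDP run, walks /sys/class/nvme/nvme*, checks whether each controller's PCI address may be used, and then repeats the id-ctrl parse for it. A rough sketch of that enumeration follows, assuming the usual sysfs layout where a PCIe controller's device link points at its PCI function; it is not the exact functions.sh code.

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      name=${ctrl##*/}
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
      echo "found controller $name at $pci"
  done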
00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.954 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.954 11:24:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.954 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.954 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.954 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.955 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.955 
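The run of nvme/functions.sh trace above is its nvme_get helper walking the text output of "nvme id-ctrl /dev/nvme0" one line at a time: each "field : value" line is split on the colon (IFS=:), empty values are skipped, and the pair is eval'd into a global associative array (nvme0[vid], nvme0[sqes], and so on). A minimal sketch of that pattern, assuming the same colon-separated id-ctrl text format and a plain "nvme" binary on PATH; the names parse_id_ctrl and ctrl_info are illustrative, not the ones functions.sh uses:

#!/usr/bin/env bash
# Sketch only: store every non-empty "field : value" pair from id-ctrl in an array.
parse_id_ctrl() {
    local dev=$1 reg val
    declare -gA ctrl_info=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field names are padded with spaces in the output
        val=${val# }               # drop the space that follows the colon
        [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme0
echo "vid=${ctrl_info[vid]} sqes=${ctrl_info[sqes]} subnqn=${ctrl_info[subnqn]}"

Values that themselves contain colons (for example the ps0 power-state line) stay intact because read assigns everything after the first colon to val.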
11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.955 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.955 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.955 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.956 11:24:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.956 11:24:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.956 11:24:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.956 11:24:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 
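A few entries back (functions.sh@47-63) the scan over /sys/class/nvme/nvme* finished its first pass: nvme0's PCI address was resolved and checked with pci_can_use, its namespaces were picked up via the "$ctrl/${ctrl##*/}n"* glob, and the results were recorded in the ctrls, nvmes, bdfs and ordered_ctrls maps before the loop moved on to nvme1 at 0000:00:10.0. A rough sketch of that scan, assuming the standard sysfs layout; the readlink-based PCI lookup and the omission of the pci_can_use allow-list check are simplifications, not what functions.sh literally does:

#!/usr/bin/env bash
# Sketch only: enumerate NVMe controllers and their namespaces from sysfs.
declare -A ctrls=() bdfs=()
declare -a ordered_ctrls=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    dev=${ctrl##*/}                                  # nvme0, nvme1, ...
    bdf=$(basename "$(readlink -f "$ctrl/device")")  # PCI address, e.g. 0000:00:10.0
    for ns in "$ctrl/${ctrl##*/}n"*; do              # block namespaces: nvme0n1, ...
        [[ -e $ns ]] && echo "namespace ${ns##*/} on $dev ($bdf)"
    done
    ctrls[$dev]=$dev
    bdfs[$dev]=$bdf
    ordered_ctrls[${dev/nvme/}]=$dev                 # keep controllers ordered by index
done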
11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 
11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.956 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.957 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.957 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.958 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.958 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.959 11:24:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.959 11:24:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.959 11:24:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.959 11:24:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.959 
11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.959 11:24:17 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.959 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.960 11:24:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.960 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
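The trace above and below is nvme/functions.sh building lookup tables for every controller under /sys/class/nvme that passes the PCI filter: nvme_get runs nvme-cli's id-ctrl for the controller (and id-ns for each of its namespaces), splits every "field : value" output line on the colon, and stores the pairs in a global associative array named after the device (nvme1, nvme1n1, nvme2, nvme2n1, ...); each controller is then registered in the ctrls, nvmes, bdfs and ordered_ctrls maps. A minimal sketch of that parsing idiom, assuming bash 4+ and nvme-cli on PATH; the helper name nvme_fields and its internals are illustrative, not the actual SPDK functions:

nvme_fields() {                                  # sketch only, not nvme/functions.sh
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                          # e.g. declares the global assoc array nvme2
    while IFS=: read -r reg val; do              # split "field : value" on the first ':'
        [[ -n $val ]] || continue                # skip header/blank lines (traced as "[[ -n '' ]]")
        reg=${reg//[[:space:]]/}                 # field name with padding stripped, e.g. subnqn
        val=${val#"${val%%[![:space:]]*}"}       # trim leading spaces from the value
        eval "$ref[\$reg]=\$val"                 # -> nvme2[subnqn]=nqn.2019-08.org.qemu:12342
    done < <(nvme "$cmd" "$dev")                 # e.g. nvme id-ctrl /dev/nvme2
}

# Hypothetical usage matching the devices scanned in this run:
#   nvme_fields nvme2   id-ctrl /dev/nvme2
#   nvme_fields nvme2n1 id-ns   /dev/nvme2n1
#   echo "${nvme2[subnqn]}"     # nqn.2019-08.org.qemu:12342 in the run above

A nameref (local -n arr=$ref; arr[$reg]=$val) would avoid the eval; the eval form is shown here only because it matches the statements visible in the trace.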
00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.961 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.962 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.962 11:24:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.963 
11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.963 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.964 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.964 11:24:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.964 11:24:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.964 11:24:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.964 11:24:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.964 11:24:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 
11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.964 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 
11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.965 11:24:18 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.965 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:32.966 11:24:18 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:32.966 11:24:18 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:32.966 11:24:18 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.800 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.800 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.800 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.061 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.061 11:24:19 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:34.062 11:24:19 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.062 11:24:19 
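[editor's note] The ctrl_has_fdp checks traced above read each controller's CTRATT value out of the register map that nvme/functions.sh builds from the identify data, then test bit 19 of CTRATT; 0x88010 (nvme3) has that bit set, while 0x8000 (nvme0/1/2) does not, so nvme3 is the one selected for the FDP run. A minimal standalone sketch of the same test, assuming the CTRATT values are passed in directly instead of being parsed from identify output:

```bash
#!/usr/bin/env bash
# Minimal sketch of the FDP capability check traced above.
# Assumption: CTRATT values are supplied as hex strings here rather than
# read out of the per-controller register map built by nvme/functions.sh.

ctrl_has_fdp() {
    local ctratt=$1
    # Bit 19 of CTRATT advertises Flexible Data Placement support.
    (( ctratt & 1 << 19 ))
}

# Values observed in this run: nvme0/1/2 report 0x8000, nvme3 reports 0x88010.
for entry in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
    name=${entry%%:*} ctratt=${entry##*:}
    if ctrl_has_fdp "$ctratt"; then
        echo "$name supports FDP (ctratt=$ctratt)"
    fi
done
```

Running this prints only "nvme3 supports FDP (ctratt=0x88010)", matching the single controller echoed by the trace.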
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.062 11:24:19 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:34.062 ************************************ 00:09:34.062 START TEST nvme_flexible_data_placement 00:09:34.062 ************************************ 00:09:34.062 11:24:19 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:34.324 Initializing NVMe Controllers 00:09:34.324 Attaching to 0000:00:13.0 00:09:34.324 Controller supports FDP Attached to 0000:00:13.0 00:09:34.324 Namespace ID: 1 Endurance Group ID: 1 00:09:34.324 Initialization complete. 00:09:34.324 00:09:34.324 ================================== 00:09:34.324 == FDP tests for Namespace: #01 == 00:09:34.324 ================================== 00:09:34.324 00:09:34.324 Get Feature: FDP: 00:09:34.324 ================= 00:09:34.324 Enabled: Yes 00:09:34.324 FDP configuration Index: 0 00:09:34.324 00:09:34.324 FDP configurations log page 00:09:34.324 =========================== 00:09:34.324 Number of FDP configurations: 1 00:09:34.324 Version: 0 00:09:34.324 Size: 112 00:09:34.324 FDP Configuration Descriptor: 0 00:09:34.324 Descriptor Size: 96 00:09:34.324 Reclaim Group Identifier format: 2 00:09:34.324 FDP Volatile Write Cache: Not Present 00:09:34.324 FDP Configuration: Valid 00:09:34.324 Vendor Specific Size: 0 00:09:34.324 Number of Reclaim Groups: 2 00:09:34.324 Number of Recalim Unit Handles: 8 00:09:34.324 Max Placement Identifiers: 128 00:09:34.324 Number of Namespaces Suppprted: 256 00:09:34.324 Reclaim unit Nominal Size: 6000000 bytes 00:09:34.324 Estimated Reclaim Unit Time Limit: Not Reported 00:09:34.325 RUH Desc #000: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #001: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #002: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #003: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #004: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #005: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #006: RUH Type: Initially Isolated 00:09:34.325 RUH Desc #007: RUH Type: Initially Isolated 00:09:34.325 00:09:34.325 FDP reclaim unit handle usage log page 00:09:34.325 ====================================== 00:09:34.325 Number of Reclaim Unit Handles: 8 00:09:34.325 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:34.325 RUH Usage Desc #001: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #002: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #003: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #004: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #005: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #006: RUH Attributes: Unused 00:09:34.325 RUH Usage Desc #007: RUH Attributes: Unused 00:09:34.325 00:09:34.325 FDP statistics log page 00:09:34.325 ======================= 00:09:34.325 Host bytes with metadata written: 1055023104 00:09:34.325 Media bytes with metadata written: 1055199232 00:09:34.325 Media bytes erased: 0 00:09:34.325 00:09:34.325 FDP Reclaim unit handle status 00:09:34.325 ============================== 00:09:34.325 Number of RUHS descriptors: 2 00:09:34.325 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000031da 00:09:34.325 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:34.325 00:09:34.325 FDP write on placement id: 0 success 00:09:34.325 00:09:34.325 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:09:34.325 00:09:34.325 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:34.325 00:09:34.325 Get Feature: FDP Events for Placement handle: #0 00:09:34.325 ======================== 00:09:34.325 Number of FDP Events: 6 00:09:34.325 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:34.325 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:34.325 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:34.325 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:34.325 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:34.325 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:34.325 00:09:34.325 FDP events log page 00:09:34.325 =================== 00:09:34.325 Number of FDP events: 1 00:09:34.325 FDP Event #0: 00:09:34.325 Event Type: RU Not Written to Capacity 00:09:34.325 Placement Identifier: Valid 00:09:34.325 NSID: Valid 00:09:34.325 Location: Valid 00:09:34.325 Placement Identifier: 0 00:09:34.325 Event Timestamp: 10 00:09:34.325 Namespace Identifier: 1 00:09:34.325 Reclaim Group Identifier: 0 00:09:34.325 Reclaim Unit Handle Identifier: 0 00:09:34.325 00:09:34.325 FDP test passed 00:09:34.325 00:09:34.325 real 0m0.263s 00:09:34.325 user 0m0.091s 00:09:34.325 sys 0m0.070s 00:09:34.325 ************************************ 00:09:34.325 END TEST nvme_flexible_data_placement 00:09:34.325 ************************************ 00:09:34.325 11:24:19 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.325 11:24:19 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:34.325 00:09:34.325 real 0m7.720s 00:09:34.325 user 0m1.016s 00:09:34.325 sys 0m1.451s 00:09:34.325 11:24:19 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.325 11:24:19 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:34.325 ************************************ 00:09:34.325 END TEST nvme_fdp 00:09:34.325 ************************************ 00:09:34.325 11:24:19 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:34.325 11:24:19 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:34.325 11:24:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:34.325 11:24:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.325 11:24:19 -- common/autotest_common.sh@10 -- # set +x 00:09:34.325 ************************************ 00:09:34.325 START TEST nvme_rpc 00:09:34.325 ************************************ 00:09:34.325 11:24:19 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:34.591 * Looking for test storage... 
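[editor's note] The flexible-data-placement pass above comes from SPDK's standalone fdp test binary, pointed at the controller that the detection step selected. A hedged sketch of the same invocation, assuming the repository lives under $SPDK_DIR and the target BDF is an FDP-capable device already bound to a userspace driver:

```bash
#!/usr/bin/env bash
# Sketch: re-run the FDP placement test against one controller.
# Assumptions: SPDK_DIR points at a built SPDK tree and BDF names an
# FDP-capable NVMe controller bound to uio_pci_generic or vfio-pci.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
BDF=${1:-0000:00:13.0}

# Same transport string format used by the traced run.
"$SPDK_DIR/test/nvme/fdp/fdp" -r "trtype:pcie traddr:$BDF"
```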
00:09:34.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.591 11:24:19 nvme_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:34.591 11:24:19 nvme_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:09:34.591 11:24:19 nvme_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:34.591 11:24:19 nvme_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:34.591 11:24:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.592 11:24:19 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:34.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.592 --rc genhtml_branch_coverage=1 00:09:34.592 --rc genhtml_function_coverage=1 00:09:34.592 --rc genhtml_legend=1 00:09:34.592 --rc geninfo_all_blocks=1 00:09:34.592 --rc geninfo_unexecuted_blocks=1 00:09:34.592 00:09:34.592 ' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:34.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.592 --rc genhtml_branch_coverage=1 00:09:34.592 --rc genhtml_function_coverage=1 00:09:34.592 --rc genhtml_legend=1 00:09:34.592 --rc geninfo_all_blocks=1 00:09:34.592 --rc geninfo_unexecuted_blocks=1 00:09:34.592 00:09:34.592 ' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:09:34.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.592 --rc genhtml_branch_coverage=1 00:09:34.592 --rc genhtml_function_coverage=1 00:09:34.592 --rc genhtml_legend=1 00:09:34.592 --rc geninfo_all_blocks=1 00:09:34.592 --rc geninfo_unexecuted_blocks=1 00:09:34.592 00:09:34.592 ' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:34.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.592 --rc genhtml_branch_coverage=1 00:09:34.592 --rc genhtml_function_coverage=1 00:09:34.592 --rc genhtml_legend=1 00:09:34.592 --rc geninfo_all_blocks=1 00:09:34.592 --rc geninfo_unexecuted_blocks=1 00:09:34.592 00:09:34.592 ' 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1505 -- # bdfs=() 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1505 -- # local bdfs 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1494 -- # bdfs=() 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1494 -- # local bdfs 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65579 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65579 00:09:34.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.592 11:24:19 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 65579 ']' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.592 11:24:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.852 [2024-10-27 11:24:19.881583] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
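[editor's note] The get_first_nvme_bdf step traced here resolves which controller nvme_rpc.sh will attach: scripts/gen_nvme.sh emits a bdev_nvme JSON config fragment, jq pulls the transport addresses out of it, and the first entry (0000:00:10.0 in this run) wins. A rough standalone equivalent, assuming gen_nvme.sh is run from an SPDK checkout and jq is installed:

```bash
#!/usr/bin/env bash
# Sketch of the BDF discovery traced above (not the exact helper from
# common/autotest_common.sh, just the same pipeline).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# gen_nvme.sh prints a JSON config whose params.traddr fields are PCI BDFs.
mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

printf 'found %d controllers, using %s\n' "${#bdfs[@]}" "${bdfs[0]}"
```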
00:09:34.852 [2024-10-27 11:24:19.881729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65579 ] 00:09:34.852 [2024-10-27 11:24:20.046249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.114 [2024-10-27 11:24:20.179682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.114 [2024-10-27 11:24:20.179774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.687 11:24:20 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.687 11:24:20 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:35.687 11:24:20 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:35.949 Nvme0n1 00:09:35.949 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:35.949 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:36.210 request: 00:09:36.210 { 00:09:36.210 "bdev_name": "Nvme0n1", 00:09:36.210 "filename": "non_existing_file", 00:09:36.210 "method": "bdev_nvme_apply_firmware", 00:09:36.210 "req_id": 1 00:09:36.210 } 00:09:36.210 Got JSON-RPC error response 00:09:36.210 response: 00:09:36.210 { 00:09:36.210 "code": -32603, 00:09:36.210 "message": "open file failed." 00:09:36.210 } 00:09:36.210 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:36.210 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:36.210 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:36.472 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:36.472 11:24:21 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65579 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 65579 ']' 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 65579 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65579 00:09:36.472 killing process with pid 65579 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65579' 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@969 -- # kill 65579 00:09:36.472 11:24:21 nvme_rpc -- common/autotest_common.sh@974 -- # wait 65579 00:09:37.861 00:09:37.861 real 0m3.510s 00:09:37.861 user 0m6.588s 00:09:37.861 sys 0m0.628s 00:09:37.861 11:24:23 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.861 11:24:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.861 ************************************ 00:09:37.861 END TEST nvme_rpc 00:09:37.861 ************************************ 00:09:37.861 11:24:23 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:37.861 11:24:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
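[editor's note] The nvme_rpc pass above reduces to three rpc.py calls against the running spdk_tgt: attach the controller, deliberately feed bdev_nvme_apply_firmware a missing file (the -32603 "open file failed." response is the expected outcome), and detach again. A condensed sketch of that sequence, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and reusing the BDF found earlier:

```bash
#!/usr/bin/env bash
# Sketch of the RPC sequence exercised by nvme_rpc.sh.
# Assumptions: spdk_tgt is already running on the default RPC socket and
# BDF names an NVMe controller it can claim.
set -uo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"
BDF=${1:-0000:00:10.0}

"$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$BDF"

# Expected to fail: the firmware file does not exist, so the RPC should
# return -32603 "open file failed." just as in the traced run.
if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "unexpected success" >&2
else
    echo "got the expected 'open file failed.' error"
fi

"$rpc" bdev_nvme_detach_controller Nvme0
```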
1 ']' 00:09:37.861 11:24:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.861 11:24:23 -- common/autotest_common.sh@10 -- # set +x 00:09:38.123 ************************************ 00:09:38.123 START TEST nvme_rpc_timeouts 00:09:38.123 ************************************ 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:38.123 * Looking for test storage... 00:09:38.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lcov --version 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.123 11:24:23 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:38.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.123 --rc genhtml_branch_coverage=1 00:09:38.123 --rc genhtml_function_coverage=1 00:09:38.123 --rc genhtml_legend=1 00:09:38.123 --rc geninfo_all_blocks=1 00:09:38.123 --rc geninfo_unexecuted_blocks=1 00:09:38.123 00:09:38.123 ' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:38.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.123 --rc genhtml_branch_coverage=1 00:09:38.123 --rc genhtml_function_coverage=1 00:09:38.123 --rc genhtml_legend=1 00:09:38.123 --rc geninfo_all_blocks=1 00:09:38.123 --rc geninfo_unexecuted_blocks=1 00:09:38.123 00:09:38.123 ' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:38.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.123 --rc genhtml_branch_coverage=1 00:09:38.123 --rc genhtml_function_coverage=1 00:09:38.123 --rc genhtml_legend=1 00:09:38.123 --rc geninfo_all_blocks=1 00:09:38.123 --rc geninfo_unexecuted_blocks=1 00:09:38.123 00:09:38.123 ' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:38.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.123 --rc genhtml_branch_coverage=1 00:09:38.123 --rc genhtml_function_coverage=1 00:09:38.123 --rc genhtml_legend=1 00:09:38.123 --rc geninfo_all_blocks=1 00:09:38.123 --rc geninfo_unexecuted_blocks=1 00:09:38.123 00:09:38.123 ' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65650 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65650 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65682 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65682 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 65682 ']' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.123 11:24:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:38.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.123 11:24:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:38.123 [2024-10-27 11:24:23.351633] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:09:38.123 [2024-10-27 11:24:23.352148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65682 ] 00:09:38.384 [2024-10-27 11:24:23.508910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.384 [2024-10-27 11:24:23.629503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.384 [2024-10-27 11:24:23.629585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.957 11:24:24 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.957 11:24:24 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:38.957 Checking default timeout settings: 00:09:38.957 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:38.957 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:39.530 Making settings changes with rpc: 00:09:39.530 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:39.530 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:39.530 Check default vs. modified settings: 00:09:39.530 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:39.530 11:24:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65650 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65650 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:39.819 Setting action_on_timeout is changed as expected. 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:39.819 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:39.820 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65650 00:09:39.820 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:39.820 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65650 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:40.082 Setting timeout_us is changed as expected. 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
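The default-vs-modified check traced above (and continuing below for timeout_admin_us) is a save/modify/save/compare pattern. A sketch of it, under the assumption that the two save_config calls are redirected into the temp files; every command in the loop is the one shown in the xtrace.

# Save defaults, change the NVMe bdev timeouts over RPC, save again, then compare field by field.
$rpc_py save_config > "$tmpfile_default_settings"
$rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc_py save_config > "$tmpfile_modified_settings"

settings_to_check='action_on_timeout timeout_us timeout_admin_us'
for setting in $settings_to_check; do
    # save_config emits JSON lines like '"timeout_us": 12000000,'; awk grabs the value field,
    # sed strips the punctuation so only the bare token is compared.
    setting_before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    setting_modified=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$setting_before" == "$setting_modified" ]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done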
00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65650 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65650 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:40.082 Setting timeout_admin_us is changed as expected. 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65650 /tmp/settings_modified_65650 00:09:40.082 11:24:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65682 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 65682 ']' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 65682 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65682 00:09:40.082 killing process with pid 65682 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65682' 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 65682 00:09:40.082 11:24:25 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 65682 00:09:41.469 RPC TIMEOUT SETTING TEST PASSED. 00:09:41.469 11:24:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
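The teardown traced just above clears the trap, drops the temp files, and stops the target. killprocess is the framework helper; the inline version below is an approximation of what the trace shows it doing (it also sanity-checks the process name with ps before killing, reactor_0 in this run).

trap - SIGINT SIGTERM EXIT
rm -f "$tmpfile_default_settings" "$tmpfile_modified_settings"
process_name=$(ps --no-headers -o comm= "$spdk_tgt_pid")    # reactor_0 here
echo "killing process with pid $spdk_tgt_pid"
kill "$spdk_tgt_pid"
wait "$spdk_tgt_pid" || true    # the target exits cleanly once the signal is delivered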
00:09:41.469 00:09:41.469 real 0m3.204s 00:09:41.469 user 0m6.235s 00:09:41.469 sys 0m0.477s 00:09:41.469 11:24:26 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.469 11:24:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:41.469 ************************************ 00:09:41.469 END TEST nvme_rpc_timeouts 00:09:41.469 ************************************ 00:09:41.469 11:24:26 -- spdk/autotest.sh@239 -- # uname -s 00:09:41.469 11:24:26 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:41.469 11:24:26 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.469 11:24:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.469 11:24:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.469 11:24:26 -- common/autotest_common.sh@10 -- # set +x 00:09:41.469 ************************************ 00:09:41.469 START TEST sw_hotplug 00:09:41.469 ************************************ 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.469 * Looking for test storage... 00:09:41.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1689 -- # lcov --version 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.469 11:24:26 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.469 11:24:26 sw_hotplug -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:41.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.469 --rc genhtml_branch_coverage=1 00:09:41.469 --rc genhtml_function_coverage=1 00:09:41.469 --rc genhtml_legend=1 00:09:41.469 --rc geninfo_all_blocks=1 00:09:41.469 --rc geninfo_unexecuted_blocks=1 00:09:41.469 00:09:41.469 ' 00:09:41.470 11:24:26 sw_hotplug -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:41.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.470 --rc genhtml_branch_coverage=1 00:09:41.470 --rc genhtml_function_coverage=1 00:09:41.470 --rc genhtml_legend=1 00:09:41.470 --rc geninfo_all_blocks=1 00:09:41.470 --rc geninfo_unexecuted_blocks=1 00:09:41.470 00:09:41.470 ' 00:09:41.470 11:24:26 sw_hotplug -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:41.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.470 --rc genhtml_branch_coverage=1 00:09:41.470 --rc genhtml_function_coverage=1 00:09:41.470 --rc genhtml_legend=1 00:09:41.470 --rc geninfo_all_blocks=1 00:09:41.470 --rc geninfo_unexecuted_blocks=1 00:09:41.470 00:09:41.470 ' 00:09:41.470 11:24:26 sw_hotplug -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:41.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.470 --rc genhtml_branch_coverage=1 00:09:41.470 --rc genhtml_function_coverage=1 00:09:41.470 --rc genhtml_legend=1 00:09:41.470 --rc geninfo_all_blocks=1 00:09:41.470 --rc geninfo_unexecuted_blocks=1 00:09:41.470 00:09:41.470 ' 00:09:41.470 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.733 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.733 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.733 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.733 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:41.733 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:41.733 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:41.733 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
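The nvme_in_userspace step traced below enumerates every PCI function whose class/subclass/progif is 01/08/02 (NVMe) and keeps the ones the test may touch. A condensed sketch of that discovery, with the PCI_ALLOWED/PCI_BLOCKED filtering and the FreeBSD branch left out; the lspci/grep/awk/tr pipeline and the drivers/nvme check are the ones in the trace.

nvme_bdfs_sketch() {
    local bdf bdfs=()
    while read -r bdf; do
        # keep only functions currently known to the kernel nvme driver (setup.sh reset rebinds them first)
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || continue
        bdfs+=("$bdf")
    done < <(lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
    printf '%s\n' "${bdfs[@]}"
}

nvmes=($(nvme_bdfs_sketch))
nvme_count=2
nvmes=("${nvmes[@]::nvme_count}")    # this run keeps only the first two controllers, as above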
00:09:41.733 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:41.733 11:24:26 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.733 11:24:27 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:41.733 11:24:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:41.995 11:24:27 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:41.995 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:41.995 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:41.995 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.257 Waiting for block devices as requested 00:09:42.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.518 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.518 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.518 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.806 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:47.806 11:24:32 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:47.806 11:24:32 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.068 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:48.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.068 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:48.329 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:48.590 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.590 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:48.851 11:24:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66534 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:48.851 11:24:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:48.851 11:24:33 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:48.851 11:24:34 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:48.851 11:24:34 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:48.851 11:24:34 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:48.851 11:24:34 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:48.851 11:24:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:48.851 11:24:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:48.851 11:24:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:48.851 11:24:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:48.851 11:24:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:49.112 Initializing NVMe Controllers 00:09:49.112 Attaching to 0000:00:10.0 00:09:49.112 Attaching to 0000:00:11.0 00:09:49.112 Attached to 0000:00:10.0 00:09:49.112 Attached to 0000:00:11.0 00:09:49.112 Initialization complete. Starting I/O... 
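The hotplug example launched just above (build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning) sits on the two controllers while remove_attach_helper yanks them out and puts them back three times. Only the echoed values are visible in the xtrace, so the sysfs destinations in this sketch are assumptions about where those writes land; it follows the use_bdev=false path run here.

remove_attach_helper_sketch() {
    local hotplug_events=$1 hotplug_wait=$2 dev
    while ((hotplug_events--)); do
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"    # surprise-remove the controller
        done
        sleep "$hotplug_wait"                               # give the app time to see the failed state
        echo 1 > /sys/bus/pci/rescan                        # bring the functions back on the bus
        for dev in "${nvmes[@]}"; do
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > /sys/bus/pci/drivers_probe        # rebind to the userspace driver
            echo '' > "/sys/bus/pci/devices/$dev/driver_override"
        done
        sleep "$hotplug_wait"
    done
}

remove_attach_helper_sketch 3 6    # 3 events, 6 s settle time, matching the run above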
00:09:49.112 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:49.112 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:49.112 00:09:50.057 QEMU NVMe Ctrl (12340 ): 2292 I/Os completed (+2292) 00:09:50.057 QEMU NVMe Ctrl (12341 ): 2294 I/Os completed (+2294) 00:09:50.057 00:09:51.002 QEMU NVMe Ctrl (12340 ): 5140 I/Os completed (+2848) 00:09:51.002 QEMU NVMe Ctrl (12341 ): 5142 I/Os completed (+2848) 00:09:51.002 00:09:51.946 QEMU NVMe Ctrl (12340 ): 7960 I/Os completed (+2820) 00:09:51.946 QEMU NVMe Ctrl (12341 ): 7962 I/Os completed (+2820) 00:09:51.946 00:09:53.334 QEMU NVMe Ctrl (12340 ): 11412 I/Os completed (+3452) 00:09:53.334 QEMU NVMe Ctrl (12341 ): 11400 I/Os completed (+3438) 00:09:53.334 00:09:54.273 QEMU NVMe Ctrl (12340 ): 15152 I/Os completed (+3740) 00:09:54.273 QEMU NVMe Ctrl (12341 ): 15129 I/Os completed (+3729) 00:09:54.273 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.861 [2024-10-27 11:24:40.011137] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:54.861 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:54.861 [2024-10-27 11:24:40.014857] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.014997] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.015067] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.015119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:54.861 [2024-10-27 11:24:40.020179] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.020259] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.020286] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.020324] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.861 [2024-10-27 11:24:40.040871] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:09:54.861 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:54.861 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:54.861 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:54.861 EAL: Scan for (pci) bus failed. 
00:09:54.861 [2024-10-27 11:24:40.043696] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.043784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.043853] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.043900] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:54.861 [2024-10-27 11:24:40.045806] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.045843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.045858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.861 [2024-10-27 11:24:40.045871] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:55.122 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:55.122 Attaching to 0000:00:10.0 00:09:55.122 Attached to 0000:00:10.0 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.122 11:24:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:55.122 Attaching to 0000:00:11.0 00:09:55.122 Attached to 0000:00:11.0 00:09:56.064 QEMU NVMe Ctrl (12340 ): 2604 I/Os completed (+2604) 00:09:56.064 QEMU NVMe Ctrl (12341 ): 2320 I/Os completed (+2320) 00:09:56.064 00:09:57.008 QEMU NVMe Ctrl (12340 ): 5348 I/Os completed (+2744) 00:09:57.008 QEMU NVMe Ctrl (12341 ): 5060 I/Os completed (+2740) 00:09:57.008 00:09:57.951 QEMU NVMe Ctrl (12340 ): 8850 I/Os completed (+3502) 00:09:57.951 QEMU NVMe Ctrl (12341 ): 8541 I/Os completed (+3481) 00:09:57.951 00:09:59.336 QEMU NVMe Ctrl (12340 ): 11626 I/Os completed (+2776) 00:09:59.336 QEMU NVMe Ctrl (12341 ): 11317 I/Os completed (+2776) 00:09:59.336 00:10:00.278 QEMU NVMe Ctrl (12340 ): 15164 I/Os completed (+3538) 00:10:00.278 QEMU NVMe Ctrl (12341 ): 14841 I/Os completed (+3524) 00:10:00.278 00:10:01.268 QEMU NVMe Ctrl (12340 ): 19119 I/Os completed (+3955) 00:10:01.268 QEMU NVMe Ctrl (12341 ): 18797 I/Os completed (+3956) 00:10:01.268 00:10:02.212 QEMU NVMe Ctrl (12340 ): 23083 I/Os completed (+3964) 00:10:02.212 QEMU NVMe Ctrl (12341 ): 22760 I/Os completed (+3963) 00:10:02.212 00:10:03.157 QEMU NVMe Ctrl (12340 ): 27046 I/Os completed (+3963) 00:10:03.157 QEMU NVMe Ctrl (12341 ): 26720 I/Os completed (+3960) 00:10:03.157 00:10:04.100 QEMU NVMe Ctrl (12340 ): 30804 I/Os completed (+3758) 00:10:04.100 QEMU NVMe Ctrl (12341 ): 30483 I/Os completed (+3763) 00:10:04.100 00:10:05.041 QEMU NVMe Ctrl (12340 ): 34520 
I/Os completed (+3716) 00:10:05.041 QEMU NVMe Ctrl (12341 ): 34183 I/Os completed (+3700) 00:10:05.041 00:10:05.979 QEMU NVMe Ctrl (12340 ): 38238 I/Os completed (+3718) 00:10:05.979 QEMU NVMe Ctrl (12341 ): 37910 I/Os completed (+3727) 00:10:05.979 00:10:07.351 QEMU NVMe Ctrl (12340 ): 41933 I/Os completed (+3695) 00:10:07.351 QEMU NVMe Ctrl (12341 ): 41609 I/Os completed (+3699) 00:10:07.351 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.351 [2024-10-27 11:24:52.347109] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:07.351 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:07.351 [2024-10-27 11:24:52.348059] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.348091] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.348105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.348122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:07.351 [2024-10-27 11:24:52.350017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.350057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.350069] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 [2024-10-27 11:24:52.350081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.351 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.351 [2024-10-27 11:24:52.371431] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:07.351 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:07.351 [2024-10-27 11:24:52.372282] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.372321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.372336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.372349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:07.352 [2024-10-27 11:24:52.373692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.373724] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.373735] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 [2024-10-27 11:24:52.373747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:07.352 Attaching to 0000:00:10.0 00:10:07.352 Attached to 0000:00:10.0 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.352 11:24:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:07.352 Attaching to 0000:00:11.0 00:10:07.352 Attached to 0000:00:11.0 00:10:08.287 QEMU NVMe Ctrl (12340 ): 2532 I/Os completed (+2532) 00:10:08.287 QEMU NVMe Ctrl (12341 ): 2245 I/Os completed (+2245) 00:10:08.287 00:10:09.223 QEMU NVMe Ctrl (12340 ): 6239 I/Os completed (+3707) 00:10:09.223 QEMU NVMe Ctrl (12341 ): 5956 I/Os completed (+3711) 00:10:09.223 00:10:10.183 QEMU NVMe Ctrl (12340 ): 9765 I/Os completed (+3526) 00:10:10.183 QEMU NVMe Ctrl (12341 ): 9502 I/Os completed (+3546) 00:10:10.183 00:10:11.123 QEMU NVMe Ctrl (12340 ): 12947 I/Os completed (+3182) 00:10:11.123 QEMU NVMe Ctrl (12341 ): 12683 I/Os completed (+3181) 00:10:11.123 00:10:12.066 QEMU NVMe Ctrl (12340 ): 16655 I/Os completed (+3708) 00:10:12.066 QEMU NVMe Ctrl (12341 ): 16382 I/Os completed (+3699) 00:10:12.066 00:10:13.007 QEMU NVMe Ctrl (12340 ): 20431 I/Os completed (+3776) 00:10:13.007 QEMU NVMe Ctrl (12341 ): 20156 I/Os completed (+3774) 00:10:13.007 00:10:13.951 QEMU NVMe Ctrl (12340 ): 24224 I/Os completed (+3793) 00:10:13.951 QEMU NVMe Ctrl (12341 ): 23948 I/Os completed (+3792) 00:10:13.951 00:10:15.341 QEMU NVMe Ctrl (12340 ): 28001 I/Os completed (+3777) 00:10:15.341 QEMU NVMe Ctrl (12341 ): 27718 I/Os completed (+3770) 00:10:15.341 
00:10:16.282 QEMU NVMe Ctrl (12340 ): 31504 I/Os completed (+3503) 00:10:16.282 QEMU NVMe Ctrl (12341 ): 31243 I/Os completed (+3525) 00:10:16.282 00:10:17.226 QEMU NVMe Ctrl (12340 ): 34424 I/Os completed (+2920) 00:10:17.226 QEMU NVMe Ctrl (12341 ): 34166 I/Os completed (+2923) 00:10:17.226 00:10:18.170 QEMU NVMe Ctrl (12340 ): 38138 I/Os completed (+3714) 00:10:18.170 QEMU NVMe Ctrl (12341 ): 37893 I/Os completed (+3727) 00:10:18.170 00:10:19.114 QEMU NVMe Ctrl (12340 ): 41827 I/Os completed (+3689) 00:10:19.114 QEMU NVMe Ctrl (12341 ): 41586 I/Os completed (+3693) 00:10:19.114 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.374 [2024-10-27 11:25:04.619660] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:19.374 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:19.374 [2024-10-27 11:25:04.620624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.620666] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.620680] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.620694] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.374 [2024-10-27 11:25:04.622352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.622387] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.622399] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.622410] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.374 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.374 [2024-10-27 11:25:04.643216] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
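The 42.87 s figure printed a little further down comes from the timing wrapper traced earlier (timing_cmd, TIMEFORMAT=%2R). A sketch of that pattern, assuming the timing report is captured from stderr while the helper's own stdout is passed through on a spare descriptor; the real wrapper's plumbing may differ.

timing_cmd_sketch() {
    local TIMEFORMAT=%2R    # bash's `time` prints only elapsed wall clock, two decimals
    exec 3>&1
    # helper stdout goes to fd 3 (the real stdout); the timing line, plus any helper stderr, is captured
    helper_time=$( { time "$@" 1>&3; } 2>&1 )
    exec 3>&-
}

timing_cmd_sketch remove_attach_helper 3 6 false
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" "${#nvmes[@]}"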
00:10:19.374 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:19.374 [2024-10-27 11:25:04.644083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.644119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.644133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.644146] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.374 [2024-10-27 11:25:04.645486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.645519] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.645533] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.374 [2024-10-27 11:25:04.645544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:19.636 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:19.636 EAL: Scan for (pci) bus failed. 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:19.636 Attaching to 0000:00:10.0 00:10:19.636 Attached to 0000:00:10.0 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.636 11:25:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:19.636 Attaching to 0000:00:11.0 00:10:19.636 Attached to 0000:00:11.0 00:10:19.636 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.636 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.636 [2024-10-27 11:25:04.882831] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:31.870 11:25:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:31.870 11:25:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:31.870 11:25:16 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.87 00:10:31.870 11:25:16 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.87 00:10:31.870 11:25:16 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:31.870 11:25:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:10:31.870 11:25:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:10:31.870 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 11:25:16 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66534 00:10:38.461 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66534) - No such process 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66534 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67083 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:38.461 11:25:22 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67083 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67083 ']' 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.461 11:25:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.461 [2024-10-27 11:25:22.976907] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
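From here the test switches to the target-based variant: spdk_tgt is started, bdev_nvme_set_hotplug -e turns on hotplug monitoring, and remove_attach_helper runs with use_bdev=true, so removal is confirmed by asking the target which PCI addresses still back an NVMe bdev. A sketch of that polling loop; rpc_cmd is the framework wrapper around rpc.py, and the jq filter and 0.5 s interval are the ones in the trace (the real script feeds jq through process substitution, hence the /dev/fd/63 below).

bdev_bdfs_sketch() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

wait_until_bdevs_gone() {
    local bdfs
    bdfs=($(bdev_bdfs_sketch))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs_sketch))
    done
}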
00:10:38.461 [2024-10-27 11:25:22.977062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67083 ] 00:10:38.461 [2024-10-27 11:25:23.136527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.461 [2024-10-27 11:25:23.251189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:38.723 11:25:23 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:38.723 11:25:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.311 11:25:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.311 11:25:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.311 11:25:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.311 11:25:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.311 [2024-10-27 11:25:30.034791] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:45.311 [2024-10-27 11:25:30.036121] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.311 [2024-10-27 11:25:30.036157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.311 [2024-10-27 11:25:30.036171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.311 [2024-10-27 11:25:30.036189] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.311 [2024-10-27 11:25:30.036197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.311 [2024-10-27 11:25:30.036205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.311 [2024-10-27 11:25:30.036213] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.311 [2024-10-27 11:25:30.036221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.311 [2024-10-27 11:25:30.036228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.311 [2024-10-27 11:25:30.036239] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.311 [2024-10-27 11:25:30.036246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.311 [2024-10-27 11:25:30.036254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.311 11:25:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.311 11:25:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.311 11:25:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:45.311 11:25:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.572 [2024-10-27 11:25:30.634787] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:45.572 [2024-10-27 11:25:30.635972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.572 [2024-10-27 11:25:30.636001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.572 [2024-10-27 11:25:30.636013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.572 [2024-10-27 11:25:30.636028] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.572 [2024-10-27 11:25:30.636036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.572 [2024-10-27 11:25:30.636043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.572 [2024-10-27 11:25:30.636052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.572 [2024-10-27 11:25:30.636059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.572 [2024-10-27 11:25:30.636066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.572 [2024-10-27 11:25:30.636073] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.572 [2024-10-27 11:25:30.636081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.572 [2024-10-27 11:25:30.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.833 11:25:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.833 11:25:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.833 11:25:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:45.833 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.094 11:25:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.361 11:25:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:58.361 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.361 [2024-10-27 11:25:43.434998] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:58.361 [2024-10-27 11:25:43.436194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.361 [2024-10-27 11:25:43.436229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.361 [2024-10-27 11:25:43.436239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.361 [2024-10-27 11:25:43.436255] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.361 [2024-10-27 11:25:43.436262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.361 [2024-10-27 11:25:43.436271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.361 [2024-10-27 11:25:43.436278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.361 [2024-10-27 11:25:43.436287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.361 [2024-10-27 11:25:43.436303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.361 [2024-10-27 11:25:43.436311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.361 [2024-10-27 11:25:43.436318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.361 [2024-10-27 11:25:43.436325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.623 [2024-10-27 11:25:43.834995] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:58.623 [2024-10-27 11:25:43.836139] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.623 [2024-10-27 11:25:43.836171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.623 [2024-10-27 11:25:43.836183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.623 [2024-10-27 11:25:43.836195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.623 [2024-10-27 11:25:43.836204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.624 [2024-10-27 11:25:43.836211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.624 [2024-10-27 11:25:43.836219] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.624 [2024-10-27 11:25:43.836226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.624 [2024-10-27 11:25:43.836233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.624 [2024-10-27 11:25:43.836240] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.624 [2024-10-27 11:25:43.836248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.624 [2024-10-27 11:25:43.836254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.885 11:25:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.885 11:25:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.885 11:25:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:58.885 11:25:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:58.885 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:59.146 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:59.146 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.146 11:25:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 11:25:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:11.380 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:11.380 [2024-10-27 11:25:56.335211] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:11.380 [2024-10-27 11:25:56.336417] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.380 [2024-10-27 11:25:56.336451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.380 [2024-10-27 11:25:56.336461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.380 [2024-10-27 11:25:56.336478] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.380 [2024-10-27 11:25:56.336485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.380 [2024-10-27 11:25:56.336494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.380 [2024-10-27 11:25:56.336501] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.380 [2024-10-27 11:25:56.336509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.380 [2024-10-27 11:25:56.336515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.380 [2024-10-27 11:25:56.336524] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.380 [2024-10-27 11:25:56.336530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.380 [2024-10-27 11:25:56.336538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.641 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.641 11:25:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.641 11:25:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.641 11:25:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.642 [2024-10-27 11:25:56.835210] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:11.642 [2024-10-27 11:25:56.836381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.642 [2024-10-27 11:25:56.836411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.642 [2024-10-27 11:25:56.836422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.642 [2024-10-27 11:25:56.836433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.642 [2024-10-27 11:25:56.836442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.642 [2024-10-27 11:25:56.836449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.642 [2024-10-27 11:25:56.836458] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.642 [2024-10-27 11:25:56.836464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.642 [2024-10-27 11:25:56.836474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.642 [2024-10-27 11:25:56.836481] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.642 [2024-10-27 11:25:56.836489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.642 [2024-10-27 11:25:56.836496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.642 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:11.642 11:25:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.214 11:25:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.214 11:25:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.214 11:25:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.214 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.475 11:25:57 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.475 11:25:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.71 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.71 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.71 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.71 2 00:11:24.714 remove_attach_helper took 45.71s to complete (handling 2 nvme drive(s)) 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:24.714 11:26:09 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # 
local use_bdev=true 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:24.714 11:26:09 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.299 11:26:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.299 11:26:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.299 11:26:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:31.299 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.299 [2024-10-27 11:26:15.773349] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:31.299 [2024-10-27 11:26:15.774243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:15.774279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:15.774289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 [2024-10-27 11:26:15.774316] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:15.774324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:15.774332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 [2024-10-27 11:26:15.774339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:15.774347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:15.774354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 [2024-10-27 11:26:15.774362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:15.774368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:15.774377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 11:26:16 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:31.299 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.299 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.299 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.299 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.299 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.299 11:26:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.299 11:26:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.299 [2024-10-27 11:26:16.273347] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:31.299 [2024-10-27 11:26:16.274205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:16.274235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:16.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 [2024-10-27 11:26:16.274260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.299 [2024-10-27 11:26:16.274269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.299 [2024-10-27 11:26:16.274276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.299 [2024-10-27 11:26:16.274285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.300 [2024-10-27 11:26:16.274291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.300 [2024-10-27 11:26:16.274311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.300 [2024-10-27 11:26:16.274318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.300 [2024-10-27 11:26:16.274325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.300 [2024-10-27 11:26:16.274332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.300 11:26:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.300 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:31.300 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.559 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:31.559 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.559 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.559 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.560 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.560 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:31.560 11:26:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.560 11:26:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.560 11:26:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.560 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.560 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.828 11:26:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.828 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.829 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.829 11:26:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.066 11:26:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.066 [2024-10-27 11:26:29.173587] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:44.066 [2024-10-27 11:26:29.174467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.066 [2024-10-27 11:26:29.174503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.066 [2024-10-27 11:26:29.174514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.066 [2024-10-27 11:26:29.174531] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.066 [2024-10-27 11:26:29.174539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.066 [2024-10-27 11:26:29.174548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.066 [2024-10-27 11:26:29.174555] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.066 [2024-10-27 11:26:29.174563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.066 [2024-10-27 11:26:29.174569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.066 [2024-10-27 11:26:29.174577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.066 [2024-10-27 11:26:29.174584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.066 [2024-10-27 11:26:29.174593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:44.066 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.639 11:26:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.639 11:26:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.639 11:26:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:44.639 11:26:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.639 [2024-10-27 11:26:29.773585] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:44.639 [2024-10-27 11:26:29.774448] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.639 [2024-10-27 11:26:29.774476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.639 [2024-10-27 11:26:29.774487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.639 [2024-10-27 11:26:29.774500] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.639 [2024-10-27 11:26:29.774510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.639 [2024-10-27 11:26:29.774518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.639 [2024-10-27 11:26:29.774526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.639 [2024-10-27 11:26:29.774533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.639 [2024-10-27 11:26:29.774541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.639 [2024-10-27 11:26:29.774548] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.639 [2024-10-27 11:26:29.774556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.639 [2024-10-27 11:26:29.774563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.210 11:26:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.210 11:26:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.210 11:26:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:45.210 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:45.471 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.471 11:26:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:57.705 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:57.705 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:57.705 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.706 [2024-10-27 11:26:42.573837] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:57.706 [2024-10-27 11:26:42.574708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.574736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.574747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.574764] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.574771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.574780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.574787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.574797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.574803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.574812] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.574818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.574826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 11:26:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:57.706 11:26:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.706 [2024-10-27 11:26:42.973835] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:57.706 [2024-10-27 11:26:42.974672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.974700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.974710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.974722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.974731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.974738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.974747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.974753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.974762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.706 [2024-10-27 11:26:42.974768] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.706 [2024-10-27 11:26:42.974778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.706 [2024-10-27 11:26:42.974784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.967 11:26:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.967 11:26:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.967 11:26:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.967 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.228 11:26:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.74 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.74 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:12:10.462 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:10.462 11:26:55 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67083 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67083 ']' 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67083 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67083 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.462 killing process with pid 67083 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67083' 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67083 00:12:10.462 11:26:55 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67083 00:12:11.406 11:26:56 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:11.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.237 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.237 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:12.237 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.237 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.497 00:12:12.497 real 2m31.148s 00:12:12.497 user 1m52.799s 00:12:12.497 sys 0m17.009s 00:12:12.497 11:26:57 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.497 11:26:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 ************************************ 00:12:12.497 END TEST sw_hotplug 00:12:12.497 ************************************ 00:12:12.497 11:26:57 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:12.497 11:26:57 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.497 11:26:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.497 11:26:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.497 11:26:57 -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 ************************************ 00:12:12.497 START TEST nvme_xnvme 00:12:12.497 ************************************ 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:12.497 * Looking for test storage... 00:12:12.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.497 --rc genhtml_branch_coverage=1 00:12:12.497 --rc genhtml_function_coverage=1 00:12:12.497 --rc genhtml_legend=1 00:12:12.497 --rc geninfo_all_blocks=1 00:12:12.497 --rc geninfo_unexecuted_blocks=1 00:12:12.497 00:12:12.497 ' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.497 --rc genhtml_branch_coverage=1 00:12:12.497 --rc genhtml_function_coverage=1 00:12:12.497 --rc genhtml_legend=1 00:12:12.497 --rc geninfo_all_blocks=1 00:12:12.497 --rc geninfo_unexecuted_blocks=1 00:12:12.497 00:12:12.497 ' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.497 --rc genhtml_branch_coverage=1 00:12:12.497 --rc genhtml_function_coverage=1 00:12:12.497 --rc genhtml_legend=1 00:12:12.497 --rc geninfo_all_blocks=1 00:12:12.497 --rc geninfo_unexecuted_blocks=1 00:12:12.497 00:12:12.497 ' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.497 --rc genhtml_branch_coverage=1 00:12:12.497 --rc genhtml_function_coverage=1 00:12:12.497 --rc genhtml_legend=1 00:12:12.497 --rc geninfo_all_blocks=1 00:12:12.497 --rc geninfo_unexecuted_blocks=1 00:12:12.497 00:12:12.497 ' 00:12:12.497 11:26:57 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.497 11:26:57 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.497 11:26:57 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.497 11:26:57 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.497 11:26:57 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.497 11:26:57 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:12.497 11:26:57 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.497 11:26:57 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.497 11:26:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 ************************************ 00:12:12.497 START TEST xnvme_to_malloc_dd_copy 00:12:12.497 ************************************ 00:12:12.497 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:12.497 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:12.497 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:12.497 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:12.758 11:26:57 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:12.758 11:26:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:12.758 { 00:12:12.758 "subsystems": [ 00:12:12.758 { 00:12:12.758 "subsystem": "bdev", 00:12:12.758 "config": [ 00:12:12.758 { 00:12:12.758 "params": { 00:12:12.758 "block_size": 512, 00:12:12.758 "num_blocks": 2097152, 00:12:12.758 "name": "malloc0" 00:12:12.758 }, 00:12:12.758 "method": "bdev_malloc_create" 00:12:12.758 }, 00:12:12.758 { 00:12:12.758 "params": { 00:12:12.758 "io_mechanism": "libaio", 00:12:12.758 "filename": "/dev/nullb0", 00:12:12.758 "name": "null0" 00:12:12.758 }, 00:12:12.758 "method": "bdev_xnvme_create" 00:12:12.758 }, 00:12:12.758 { 00:12:12.758 "method": "bdev_wait_for_examine" 00:12:12.758 } 00:12:12.758 ] 00:12:12.758 } 00:12:12.758 ] 00:12:12.758 } 00:12:12.758 [2024-10-27 11:26:57.842344] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
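The JSON document printed just above is the configuration that gen_conf hands to spdk_dd for this copy pass: a malloc bdev of 2097152 blocks x 512 bytes (1 GiB, matching the null_blk gb=1 module load) on one end, and an xnvme bdev named null0 on /dev/nullb0 with the libaio io_mechanism on the other. The trace shows the config being consumed through an anonymous file descriptor; a sketch of the equivalent invocation, assuming gen_conf prints exactly the JSON above (the descriptor number /dev/fd/62 is incidental), is:

    # sketch only: the config is supplied via process substitution, seen as --json /dev/fd/62 in the trace
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)

The reverse pass that follows swaps the endpoints (--ib=null0 --ob=malloc0) to copy the data back, and both passes are then repeated with io_mechanism set to io_uring.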
00:12:12.758 [2024-10-27 11:26:57.842429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68489 ] 00:12:12.758 [2024-10-27 11:26:57.984935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.018 [2024-10-27 11:26:58.059809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.930  [2024-10-27T11:27:00.884Z] Copying: 300/1024 [MB] (300 MBps) [2024-10-27T11:27:01.830Z] Copying: 600/1024 [MB] (299 MBps) [2024-10-27T11:27:02.401Z] Copying: 901/1024 [MB] (301 MBps) [2024-10-27T11:27:04.316Z] Copying: 1024/1024 [MB] (average 300 MBps) 00:12:19.035 00:12:19.035 11:27:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:19.035 11:27:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:19.035 11:27:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.035 11:27:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.035 { 00:12:19.035 "subsystems": [ 00:12:19.035 { 00:12:19.035 "subsystem": "bdev", 00:12:19.035 "config": [ 00:12:19.035 { 00:12:19.035 "params": { 00:12:19.035 "block_size": 512, 00:12:19.035 "num_blocks": 2097152, 00:12:19.035 "name": "malloc0" 00:12:19.035 }, 00:12:19.035 "method": "bdev_malloc_create" 00:12:19.035 }, 00:12:19.035 { 00:12:19.035 "params": { 00:12:19.035 "io_mechanism": "libaio", 00:12:19.035 "filename": "/dev/nullb0", 00:12:19.035 "name": "null0" 00:12:19.035 }, 00:12:19.035 "method": "bdev_xnvme_create" 00:12:19.035 }, 00:12:19.035 { 00:12:19.035 "method": "bdev_wait_for_examine" 00:12:19.035 } 00:12:19.035 ] 00:12:19.035 } 00:12:19.035 ] 00:12:19.035 } 00:12:19.035 [2024-10-27 11:27:04.171259] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:12:19.035 [2024-10-27 11:27:04.171402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68565 ] 00:12:19.296 [2024-10-27 11:27:04.329224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.296 [2024-10-27 11:27:04.418493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.215  [2024-10-27T11:27:07.436Z] Copying: 304/1024 [MB] (304 MBps) [2024-10-27T11:27:08.379Z] Copying: 611/1024 [MB] (306 MBps) [2024-10-27T11:27:08.641Z] Copying: 917/1024 [MB] (306 MBps) [2024-10-27T11:27:10.555Z] Copying: 1024/1024 [MB] (average 306 MBps) 00:12:25.274 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:25.274 11:27:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:25.274 { 00:12:25.274 "subsystems": [ 00:12:25.274 { 00:12:25.274 "subsystem": "bdev", 00:12:25.274 "config": [ 00:12:25.274 { 00:12:25.274 "params": { 00:12:25.274 "block_size": 512, 00:12:25.274 "num_blocks": 2097152, 00:12:25.274 "name": "malloc0" 00:12:25.274 }, 00:12:25.274 "method": "bdev_malloc_create" 00:12:25.274 }, 00:12:25.274 { 00:12:25.274 "params": { 00:12:25.274 "io_mechanism": "io_uring", 00:12:25.274 "filename": "/dev/nullb0", 00:12:25.274 "name": "null0" 00:12:25.274 }, 00:12:25.274 "method": "bdev_xnvme_create" 00:12:25.274 }, 00:12:25.274 { 00:12:25.274 "method": "bdev_wait_for_examine" 00:12:25.274 } 00:12:25.274 ] 00:12:25.274 } 00:12:25.274 ] 00:12:25.274 } 00:12:25.274 [2024-10-27 11:27:10.462344] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:12:25.274 [2024-10-27 11:27:10.462462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68641 ] 00:12:25.535 [2024-10-27 11:27:10.617762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.535 [2024-10-27 11:27:10.693246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.450  [2024-10-27T11:27:13.674Z] Copying: 311/1024 [MB] (311 MBps) [2024-10-27T11:27:14.616Z] Copying: 624/1024 [MB] (312 MBps) [2024-10-27T11:27:14.878Z] Copying: 937/1024 [MB] (312 MBps) [2024-10-27T11:27:16.791Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:31.510 00:12:31.510 11:27:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:31.510 11:27:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:31.510 11:27:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.510 11:27:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.510 { 00:12:31.510 "subsystems": [ 00:12:31.510 { 00:12:31.510 "subsystem": "bdev", 00:12:31.510 "config": [ 00:12:31.510 { 00:12:31.510 "params": { 00:12:31.510 "block_size": 512, 00:12:31.510 "num_blocks": 2097152, 00:12:31.510 "name": "malloc0" 00:12:31.510 }, 00:12:31.510 "method": "bdev_malloc_create" 00:12:31.510 }, 00:12:31.510 { 00:12:31.510 "params": { 00:12:31.510 "io_mechanism": "io_uring", 00:12:31.510 "filename": "/dev/nullb0", 00:12:31.510 "name": "null0" 00:12:31.510 }, 00:12:31.510 "method": "bdev_xnvme_create" 00:12:31.510 }, 00:12:31.510 { 00:12:31.510 "method": "bdev_wait_for_examine" 00:12:31.510 } 00:12:31.510 ] 00:12:31.510 } 00:12:31.510 ] 00:12:31.510 } 00:12:31.510 [2024-10-27 11:27:16.611017] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:12:31.510 [2024-10-27 11:27:16.611103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68717 ] 00:12:31.510 [2024-10-27 11:27:16.757708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.771 [2024-10-27 11:27:16.833919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.683  [2024-10-27T11:27:19.902Z] Copying: 316/1024 [MB] (316 MBps) [2024-10-27T11:27:20.843Z] Copying: 632/1024 [MB] (316 MBps) [2024-10-27T11:27:20.843Z] Copying: 948/1024 [MB] (315 MBps) [2024-10-27T11:27:22.757Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:12:37.476 00:12:37.477 11:27:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:37.477 11:27:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:37.738 00:12:37.738 real 0m24.997s 00:12:37.738 user 0m22.149s 00:12:37.738 sys 0m2.346s 00:12:37.738 11:27:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.738 ************************************ 00:12:37.738 END TEST xnvme_to_malloc_dd_copy 00:12:37.738 ************************************ 00:12:37.738 11:27:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 11:27:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:37.738 11:27:22 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:37.738 11:27:22 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.738 11:27:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 ************************************ 00:12:37.738 START TEST xnvme_bdevperf 00:12:37.738 ************************************ 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.738 
11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.738 11:27:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.738 { 00:12:37.738 "subsystems": [ 00:12:37.738 { 00:12:37.738 "subsystem": "bdev", 00:12:37.738 "config": [ 00:12:37.738 { 00:12:37.738 "params": { 00:12:37.738 "io_mechanism": "libaio", 00:12:37.738 "filename": "/dev/nullb0", 00:12:37.738 "name": "null0" 00:12:37.738 }, 00:12:37.738 "method": "bdev_xnvme_create" 00:12:37.738 }, 00:12:37.738 { 00:12:37.738 "method": "bdev_wait_for_examine" 00:12:37.738 } 00:12:37.738 ] 00:12:37.738 } 00:12:37.738 ] 00:12:37.738 } 00:12:37.738 [2024-10-27 11:27:22.908023] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:12:37.738 [2024-10-27 11:27:22.908131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68821 ] 00:12:37.999 [2024-10-27 11:27:23.065331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.999 [2024-10-27 11:27:23.144530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.258 Running I/O for 5 seconds... 00:12:40.140 202624.00 IOPS, 791.50 MiB/s [2024-10-27T11:27:26.364Z] 202816.00 IOPS, 792.25 MiB/s [2024-10-27T11:27:27.750Z] 202816.00 IOPS, 792.25 MiB/s [2024-10-27T11:27:28.692Z] 202880.00 IOPS, 792.50 MiB/s 00:12:43.411 Latency(us) 00:12:43.411 [2024-10-27T11:27:28.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.411 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:43.411 null0 : 5.00 202835.62 792.33 0.00 0.00 313.21 105.55 1543.88 00:12:43.411 [2024-10-27T11:27:28.692Z] =================================================================================================================== 00:12:43.411 [2024-10-27T11:27:28.692Z] Total : 202835.62 792.33 0.00 0.00 313.21 105.55 1543.88 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:43.672 11:27:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.672 { 00:12:43.672 "subsystems": [ 00:12:43.672 { 00:12:43.672 "subsystem": "bdev", 00:12:43.672 "config": [ 00:12:43.672 { 00:12:43.672 "params": { 00:12:43.672 "io_mechanism": "io_uring", 00:12:43.672 "filename": "/dev/nullb0", 00:12:43.672 "name": "null0" 00:12:43.672 }, 00:12:43.672 "method": "bdev_xnvme_create" 00:12:43.672 }, 00:12:43.672 { 00:12:43.672 "method": 
"bdev_wait_for_examine" 00:12:43.672 } 00:12:43.672 ] 00:12:43.672 } 00:12:43.672 ] 00:12:43.672 } 00:12:43.934 [2024-10-27 11:27:28.965774] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:12:43.934 [2024-10-27 11:27:28.965891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68891 ] 00:12:43.934 [2024-10-27 11:27:29.126308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.228 [2024-10-27 11:27:29.242564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.487 Running I/O for 5 seconds... 00:12:46.368 182912.00 IOPS, 714.50 MiB/s [2024-10-27T11:27:32.650Z] 207040.00 IOPS, 808.75 MiB/s [2024-10-27T11:27:33.609Z] 215232.00 IOPS, 840.75 MiB/s [2024-10-27T11:27:34.553Z] 219376.00 IOPS, 856.94 MiB/s 00:12:49.272 Latency(us) 00:12:49.272 [2024-10-27T11:27:34.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.272 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:49.272 null0 : 5.00 221815.33 866.47 0.00 0.00 286.19 145.72 2394.58 00:12:49.272 [2024-10-27T11:27:34.553Z] =================================================================================================================== 00:12:49.272 [2024-10-27T11:27:34.553Z] Total : 221815.33 866.47 0.00 0.00 286.19 145.72 2394.58 00:12:49.843 11:27:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:49.843 11:27:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:49.843 00:12:49.843 real 0m12.262s 00:12:49.843 user 0m9.916s 00:12:49.843 sys 0m2.112s 00:12:49.843 11:27:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.843 ************************************ 00:12:49.843 END TEST xnvme_bdevperf 00:12:49.843 ************************************ 00:12:49.843 11:27:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.103 00:12:50.103 real 0m37.515s 00:12:50.103 user 0m32.177s 00:12:50.103 sys 0m4.580s 00:12:50.103 11:27:35 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.103 11:27:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.103 ************************************ 00:12:50.103 END TEST nvme_xnvme 00:12:50.103 ************************************ 00:12:50.103 11:27:35 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:50.103 11:27:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.103 11:27:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.103 11:27:35 -- common/autotest_common.sh@10 -- # set +x 00:12:50.103 ************************************ 00:12:50.103 START TEST blockdev_xnvme 00:12:50.103 ************************************ 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:50.103 * Looking for test storage... 
00:12:50.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.103 11:27:35 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:50.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.103 --rc genhtml_branch_coverage=1 00:12:50.103 --rc genhtml_function_coverage=1 00:12:50.103 --rc genhtml_legend=1 00:12:50.103 --rc geninfo_all_blocks=1 00:12:50.103 --rc geninfo_unexecuted_blocks=1 00:12:50.103 00:12:50.103 ' 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:50.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.103 --rc genhtml_branch_coverage=1 00:12:50.103 --rc genhtml_function_coverage=1 00:12:50.103 --rc genhtml_legend=1 
00:12:50.103 --rc geninfo_all_blocks=1 00:12:50.103 --rc geninfo_unexecuted_blocks=1 00:12:50.103 00:12:50.103 ' 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:50.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.103 --rc genhtml_branch_coverage=1 00:12:50.103 --rc genhtml_function_coverage=1 00:12:50.103 --rc genhtml_legend=1 00:12:50.103 --rc geninfo_all_blocks=1 00:12:50.103 --rc geninfo_unexecuted_blocks=1 00:12:50.103 00:12:50.103 ' 00:12:50.103 11:27:35 blockdev_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:50.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.103 --rc genhtml_branch_coverage=1 00:12:50.103 --rc genhtml_function_coverage=1 00:12:50.103 --rc genhtml_legend=1 00:12:50.103 --rc geninfo_all_blocks=1 00:12:50.103 --rc geninfo_unexecuted_blocks=1 00:12:50.103 00:12:50.103 ' 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:50.103 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69033 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69033 00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69033 ']' 00:12:50.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.104 11:27:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 11:27:35 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:50.363 [2024-10-27 11:27:35.421967] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:12:50.363 [2024-10-27 11:27:35.422113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69033 ] 00:12:50.363 [2024-10-27 11:27:35.578948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.623 [2024-10-27 11:27:35.653368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.195 11:27:36 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.195 11:27:36 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:51.195 11:27:36 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:51.195 11:27:36 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:51.195 11:27:36 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:51.195 11:27:36 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:51.195 11:27:36 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:51.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:51.457 Waiting for block devices as requested 00:12:51.457 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.718 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.718 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:51.718 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.012 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1654 -- # local nvme bdf 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 
00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:57.012 nvme0n1 00:12:57.012 nvme1n1 00:12:57.012 nvme2n1 00:12:57.012 nvme2n2 00:12:57.012 nvme2n3 00:12:57.012 nvme3n1 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:57.012 11:27:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.012 11:27:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.012 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.012 11:27:42 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.012 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.012 11:27:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "be99615c-6711-43d3-91a9-6da611ef0066"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be99615c-6711-43d3-91a9-6da611ef0066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d25a5280-f5bd-4c2e-a9af-6411f036f83e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d25a5280-f5bd-4c2e-a9af-6411f036f83e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7ff13d2c-ace4-42ed-a015-e462cf45171b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ff13d2c-ace4-42ed-a015-e462cf45171b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f5baae82-1297-4f1d-9caa-757e8db9b19e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f5baae82-1297-4f1d-9caa-757e8db9b19e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0fc66520-1d5f-4662-9bb9-7c421dc480c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0fc66520-1d5f-4662-9bb9-7c421dc480c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e3bda89-954e-45c0-9647-6f15948dd513"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6e3bda89-954e-45c0-9647-6f15948dd513",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:57.013 11:27:42 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69033 00:12:57.013 11:27:42 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69033 ']' 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69033 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69033 00:12:57.013 killing process with pid 69033 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69033' 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69033 00:12:57.013 11:27:42 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69033 00:12:58.398 11:27:43 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:58.398 11:27:43 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.398 11:27:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:58.398 11:27:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.398 11:27:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.398 ************************************ 00:12:58.398 START TEST bdev_hello_world 00:12:58.398 ************************************ 00:12:58.398 11:27:43 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:58.398 [2024-10-27 11:27:43.357737] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:12:58.398 [2024-10-27 11:27:43.357861] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69391 ] 00:12:58.398 [2024-10-27 11:27:43.514211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.398 [2024-10-27 11:27:43.598282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.660 [2024-10-27 11:27:43.895606] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:58.660 [2024-10-27 11:27:43.895803] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:58.660 [2024-10-27 11:27:43.895825] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:58.660 [2024-10-27 11:27:43.897716] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:58.660 [2024-10-27 11:27:43.898208] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:58.660 [2024-10-27 11:27:43.898227] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:58.660 [2024-10-27 11:27:43.899055] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:58.660 00:12:58.660 [2024-10-27 11:27:43.899184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:59.603 00:12:59.603 real 0m1.362s 00:12:59.603 user 0m1.054s 00:12:59.603 sys 0m0.179s 00:12:59.603 11:27:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.603 ************************************ 00:12:59.603 END TEST bdev_hello_world 00:12:59.603 ************************************ 00:12:59.603 11:27:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:59.603 11:27:44 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:59.603 11:27:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.603 11:27:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.603 11:27:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.603 ************************************ 00:12:59.603 START TEST bdev_bounds 00:12:59.603 ************************************ 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:59.603 Process bdevio pid: 69423 00:12:59.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69423 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69423' 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69423 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69423 ']' 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.603 11:27:44 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:59.603 [2024-10-27 11:27:44.792184] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:12:59.603 [2024-10-27 11:27:44.792355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69423 ] 00:12:59.865 [2024-10-27 11:27:44.956335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.865 [2024-10-27 11:27:45.078601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.865 [2024-10-27 11:27:45.079337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.865 [2024-10-27 11:27:45.079387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.437 11:27:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.437 11:27:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:00.437 11:27:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:00.699 I/O targets: 00:13:00.699 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:00.699 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:00.699 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.699 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.699 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:00.699 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:00.699 00:13:00.699 00:13:00.699 CUnit - A unit testing framework for C - Version 2.1-3 00:13:00.699 http://cunit.sourceforge.net/ 00:13:00.699 00:13:00.699 00:13:00.699 Suite: bdevio tests on: nvme3n1 00:13:00.699 Test: blockdev write read block ...passed 00:13:00.699 Test: blockdev write zeroes read block ...passed 00:13:00.699 Test: blockdev write zeroes read no split ...passed 00:13:00.699 Test: blockdev write zeroes read split ...passed 00:13:00.699 Test: blockdev write zeroes read split partial ...passed 00:13:00.699 Test: blockdev reset ...passed 00:13:00.699 Test: blockdev write read 8 blocks ...passed 00:13:00.699 Test: blockdev write read size > 128k ...passed 00:13:00.699 Test: blockdev write read invalid size ...passed 00:13:00.699 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.699 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.699 Test: blockdev write read max offset ...passed 00:13:00.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.699 Test: blockdev writev readv 8 blocks ...passed 00:13:00.699 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.699 Test: blockdev writev readv block ...passed 00:13:00.699 Test: blockdev writev readv size > 128k ...passed 00:13:00.699 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.699 Test: blockdev comparev and writev ...passed 00:13:00.699 Test: blockdev nvme passthru rw ...passed 00:13:00.699 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.699 Test: blockdev nvme admin passthru ...passed 00:13:00.699 Test: blockdev copy ...passed 00:13:00.699 Suite: bdevio tests on: nvme2n3 00:13:00.699 Test: blockdev write read block ...passed 00:13:00.699 Test: blockdev write zeroes read block ...passed 00:13:00.699 Test: blockdev write zeroes read no split ...passed 00:13:00.699 Test: blockdev write zeroes read split ...passed 00:13:00.699 Test: blockdev write zeroes read split partial ...passed 00:13:00.699 Test: blockdev reset ...passed 
00:13:00.699 Test: blockdev write read 8 blocks ...passed 00:13:00.699 Test: blockdev write read size > 128k ...passed 00:13:00.699 Test: blockdev write read invalid size ...passed 00:13:00.699 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.699 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.699 Test: blockdev write read max offset ...passed 00:13:00.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.699 Test: blockdev writev readv 8 blocks ...passed 00:13:00.699 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.699 Test: blockdev writev readv block ...passed 00:13:00.699 Test: blockdev writev readv size > 128k ...passed 00:13:00.699 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.699 Test: blockdev comparev and writev ...passed 00:13:00.699 Test: blockdev nvme passthru rw ...passed 00:13:00.699 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.699 Test: blockdev nvme admin passthru ...passed 00:13:00.699 Test: blockdev copy ...passed 00:13:00.699 Suite: bdevio tests on: nvme2n2 00:13:00.699 Test: blockdev write read block ...passed 00:13:00.699 Test: blockdev write zeroes read block ...passed 00:13:00.699 Test: blockdev write zeroes read no split ...passed 00:13:00.699 Test: blockdev write zeroes read split ...passed 00:13:00.699 Test: blockdev write zeroes read split partial ...passed 00:13:00.699 Test: blockdev reset ...passed 00:13:00.699 Test: blockdev write read 8 blocks ...passed 00:13:00.699 Test: blockdev write read size > 128k ...passed 00:13:00.699 Test: blockdev write read invalid size ...passed 00:13:00.699 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.699 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.699 Test: blockdev write read max offset ...passed 00:13:00.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.699 Test: blockdev writev readv 8 blocks ...passed 00:13:00.961 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.961 Test: blockdev writev readv block ...passed 00:13:00.961 Test: blockdev writev readv size > 128k ...passed 00:13:00.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.961 Test: blockdev comparev and writev ...passed 00:13:00.961 Test: blockdev nvme passthru rw ...passed 00:13:00.961 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.961 Test: blockdev nvme admin passthru ...passed 00:13:00.961 Test: blockdev copy ...passed 00:13:00.961 Suite: bdevio tests on: nvme2n1 00:13:00.961 Test: blockdev write read block ...passed 00:13:00.961 Test: blockdev write zeroes read block ...passed 00:13:00.961 Test: blockdev write zeroes read no split ...passed 00:13:00.961 Test: blockdev write zeroes read split ...passed 00:13:00.961 Test: blockdev write zeroes read split partial ...passed 00:13:00.961 Test: blockdev reset ...passed 00:13:00.961 Test: blockdev write read 8 blocks ...passed 00:13:00.961 Test: blockdev write read size > 128k ...passed 00:13:00.961 Test: blockdev write read invalid size ...passed 00:13:00.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.961 Test: blockdev write read max offset ...passed 00:13:00.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.961 Test: blockdev writev readv 8 blocks 
...passed 00:13:00.961 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.961 Test: blockdev writev readv block ...passed 00:13:00.961 Test: blockdev writev readv size > 128k ...passed 00:13:00.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.961 Test: blockdev comparev and writev ...passed 00:13:00.961 Test: blockdev nvme passthru rw ...passed 00:13:00.961 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.961 Test: blockdev nvme admin passthru ...passed 00:13:00.961 Test: blockdev copy ...passed 00:13:00.961 Suite: bdevio tests on: nvme1n1 00:13:00.961 Test: blockdev write read block ...passed 00:13:00.961 Test: blockdev write zeroes read block ...passed 00:13:00.961 Test: blockdev write zeroes read no split ...passed 00:13:00.961 Test: blockdev write zeroes read split ...passed 00:13:00.961 Test: blockdev write zeroes read split partial ...passed 00:13:00.961 Test: blockdev reset ...passed 00:13:00.961 Test: blockdev write read 8 blocks ...passed 00:13:00.961 Test: blockdev write read size > 128k ...passed 00:13:00.961 Test: blockdev write read invalid size ...passed 00:13:00.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.961 Test: blockdev write read max offset ...passed 00:13:00.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.961 Test: blockdev writev readv 8 blocks ...passed 00:13:00.961 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.961 Test: blockdev writev readv block ...passed 00:13:00.961 Test: blockdev writev readv size > 128k ...passed 00:13:00.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:00.961 Test: blockdev comparev and writev ...passed 00:13:00.961 Test: blockdev nvme passthru rw ...passed 00:13:00.961 Test: blockdev nvme passthru vendor specific ...passed 00:13:00.961 Test: blockdev nvme admin passthru ...passed 00:13:00.961 Test: blockdev copy ...passed 00:13:00.961 Suite: bdevio tests on: nvme0n1 00:13:00.961 Test: blockdev write read block ...passed 00:13:00.961 Test: blockdev write zeroes read block ...passed 00:13:00.961 Test: blockdev write zeroes read no split ...passed 00:13:00.961 Test: blockdev write zeroes read split ...passed 00:13:00.961 Test: blockdev write zeroes read split partial ...passed 00:13:00.961 Test: blockdev reset ...passed 00:13:00.961 Test: blockdev write read 8 blocks ...passed 00:13:00.961 Test: blockdev write read size > 128k ...passed 00:13:00.961 Test: blockdev write read invalid size ...passed 00:13:00.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:00.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:00.961 Test: blockdev write read max offset ...passed 00:13:00.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:00.961 Test: blockdev writev readv 8 blocks ...passed 00:13:00.961 Test: blockdev writev readv 30 x 1block ...passed 00:13:00.961 Test: blockdev writev readv block ...passed 00:13:00.961 Test: blockdev writev readv size > 128k ...passed 00:13:00.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:01.223 Test: blockdev comparev and writev ...passed 00:13:01.223 Test: blockdev nvme passthru rw ...passed 00:13:01.223 Test: blockdev nvme passthru vendor specific ...passed 00:13:01.223 Test: blockdev nvme admin passthru ...passed 00:13:01.223 Test: blockdev copy ...passed 
00:13:01.223 00:13:01.223 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.223 suites 6 6 n/a 0 0 00:13:01.223 tests 138 138 138 0 0 00:13:01.223 asserts 780 780 780 0 n/a 00:13:01.223 00:13:01.223 Elapsed time = 1.250 seconds 00:13:01.223 0 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69423 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69423 ']' 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69423 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69423 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69423' 00:13:01.223 killing process with pid 69423 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69423 00:13:01.223 11:27:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69423 00:13:02.166 11:27:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:02.166 00:13:02.166 real 0m2.352s 00:13:02.166 user 0m5.684s 00:13:02.166 sys 0m0.387s 00:13:02.166 11:27:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.166 ************************************ 00:13:02.166 END TEST bdev_bounds 00:13:02.166 ************************************ 00:13:02.166 11:27:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:02.166 11:27:47 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:02.166 11:27:47 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:02.166 11:27:47 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.166 11:27:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.166 ************************************ 00:13:02.166 START TEST bdev_nbd 00:13:02.166 ************************************ 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69477 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69477 /var/tmp/spdk-nbd.sock 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69477 ']' 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:02.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.166 11:27:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:02.166 [2024-10-27 11:27:47.226493] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
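[editor's note] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock..." message above comes from the waitforlisten step after bdev_svc is launched with the JSON config. A minimal sketch of an equivalent readiness check is shown below; the polling interval is illustrative and the real helper in autotest_common.sh differs, but spdk_get_version is a standard SPDK RPC and the socket and rpc.py paths are taken from the trace.

    # Poll the freshly started bdev_svc until its RPC socket answers.
    SOCK=/var/tmp/spdk-nbd.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$RPC" -s "$SOCK" spdk_get_version > /dev/null 2>&1; do
        sleep 0.5        # interval is illustrative, not the helper's actual value
    done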
00:13:02.166 [2024-10-27 11:27:47.226633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.166 [2024-10-27 11:27:47.392348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.427 [2024-10-27 11:27:47.513432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.000 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.262 
1+0 records in 00:13:03.262 1+0 records out 00:13:03.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108599 s, 3.8 MB/s 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.262 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.524 1+0 records in 00:13:03.524 1+0 records out 00:13:03.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109403 s, 3.7 MB/s 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.524 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:03.786 11:27:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.786 1+0 records in 00:13:03.786 1+0 records out 00:13:03.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810981 s, 5.1 MB/s 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:03.786 11:27:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.048 1+0 records in 00:13:04.048 1+0 records out 00:13:04.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892029 s, 4.6 MB/s 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.048 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.311 1+0 records in 00:13:04.311 1+0 records out 00:13:04.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127138 s, 3.2 MB/s 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.311 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:04.571 11:27:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.571 1+0 records in 00:13:04.571 1+0 records out 00:13:04.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799125 s, 5.1 MB/s 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd0", 00:13:04.571 "bdev_name": "nvme0n1" 00:13:04.571 }, 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd1", 00:13:04.571 "bdev_name": "nvme1n1" 00:13:04.571 }, 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd2", 00:13:04.571 "bdev_name": "nvme2n1" 00:13:04.571 }, 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd3", 00:13:04.571 "bdev_name": "nvme2n2" 00:13:04.571 }, 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd4", 00:13:04.571 "bdev_name": "nvme2n3" 00:13:04.571 }, 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd5", 00:13:04.571 "bdev_name": "nvme3n1" 00:13:04.571 } 00:13:04.571 ]' 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:04.571 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:04.571 { 00:13:04.571 "nbd_device": "/dev/nbd0", 00:13:04.571 "bdev_name": "nvme0n1" 00:13:04.571 }, 00:13:04.572 { 00:13:04.572 "nbd_device": "/dev/nbd1", 00:13:04.572 "bdev_name": "nvme1n1" 00:13:04.572 }, 00:13:04.572 { 00:13:04.572 "nbd_device": "/dev/nbd2", 00:13:04.572 "bdev_name": "nvme2n1" 00:13:04.572 }, 00:13:04.572 { 00:13:04.572 "nbd_device": "/dev/nbd3", 00:13:04.572 "bdev_name": "nvme2n2" 00:13:04.572 }, 00:13:04.572 { 00:13:04.572 "nbd_device": "/dev/nbd4", 00:13:04.572 "bdev_name": "nvme2n3" 00:13:04.572 }, 00:13:04.572 { 00:13:04.572 "nbd_device": "/dev/nbd5", 00:13:04.572 "bdev_name": "nvme3n1" 00:13:04.572 } 00:13:04.572 ]' 00:13:04.572 11:27:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.832 11:27:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.832 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.094 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.356 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.617 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.878 11:27:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.138 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:06.400 /dev/nbd0 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.400 1+0 records in 00:13:06.400 1+0 records out 00:13:06.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368209 s, 11.1 MB/s 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.400 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:06.663 /dev/nbd1 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.663 1+0 records in 00:13:06.663 1+0 records out 00:13:06.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459933 s, 8.9 MB/s 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.663 11:27:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.663 11:27:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:06.924 /dev/nbd10 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.924 1+0 records in 00:13:06.924 1+0 records out 00:13:06.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345074 s, 11.9 MB/s 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:06.924 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:07.186 /dev/nbd11 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.186 11:27:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.186 1+0 records in 00:13:07.186 1+0 records out 00:13:07.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571426 s, 7.2 MB/s 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.186 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:07.447 /dev/nbd12 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.447 1+0 records in 00:13:07.447 1+0 records out 00:13:07.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965178 s, 4.2 MB/s 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.447 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.448 11:27:52 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.448 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.448 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:07.448 /dev/nbd13 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.709 1+0 records in 00:13:07.709 1+0 records out 00:13:07.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00198268 s, 2.1 MB/s 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd0", 00:13:07.709 "bdev_name": "nvme0n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd1", 00:13:07.709 "bdev_name": "nvme1n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd10", 00:13:07.709 "bdev_name": "nvme2n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd11", 00:13:07.709 "bdev_name": "nvme2n2" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd12", 00:13:07.709 "bdev_name": "nvme2n3" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd13", 00:13:07.709 "bdev_name": "nvme3n1" 00:13:07.709 } 00:13:07.709 ]' 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd0", 00:13:07.709 "bdev_name": "nvme0n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd1", 00:13:07.709 "bdev_name": "nvme1n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd10", 00:13:07.709 "bdev_name": "nvme2n1" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd11", 00:13:07.709 "bdev_name": "nvme2n2" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd12", 00:13:07.709 "bdev_name": "nvme2n3" 00:13:07.709 }, 00:13:07.709 { 00:13:07.709 "nbd_device": "/dev/nbd13", 00:13:07.709 "bdev_name": "nvme3n1" 00:13:07.709 } 00:13:07.709 ]' 00:13:07.709 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:07.970 /dev/nbd1 00:13:07.970 /dev/nbd10 00:13:07.970 /dev/nbd11 00:13:07.970 /dev/nbd12 00:13:07.970 /dev/nbd13' 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:07.970 /dev/nbd1 00:13:07.970 /dev/nbd10 00:13:07.970 /dev/nbd11 00:13:07.970 /dev/nbd12 00:13:07.970 /dev/nbd13' 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:07.970 11:27:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:07.970 256+0 records in 00:13:07.970 256+0 records out 00:13:07.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00829709 s, 126 MB/s 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:07.970 256+0 records in 00:13:07.970 256+0 records out 00:13:07.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219269 s, 4.8 MB/s 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:07.970 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:08.231 256+0 records in 00:13:08.231 256+0 records out 00:13:08.231 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247411 s, 
4.2 MB/s 00:13:08.231 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.231 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:08.493 256+0 records in 00:13:08.493 256+0 records out 00:13:08.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18487 s, 5.7 MB/s 00:13:08.493 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.493 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:08.755 256+0 records in 00:13:08.755 256+0 records out 00:13:08.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225927 s, 4.6 MB/s 00:13:08.755 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:08.755 11:27:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:09.016 256+0 records in 00:13:09.016 256+0 records out 00:13:09.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236175 s, 4.4 MB/s 00:13:09.016 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:09.016 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:09.277 256+0 records in 00:13:09.277 256+0 records out 00:13:09.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197346 s, 5.3 MB/s 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:09.277 
11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.277 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.539 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.800 11:27:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.800 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.061 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.322 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.584 
11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:10.584 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:10.845 11:27:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:10.845 malloc_lvol_verify 00:13:10.845 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:11.106 451abe5e-6a69-40a3-a28c-6f46129bb38d 00:13:11.106 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:11.365 656a460c-f9f2-406f-b230-16b26dd76c9c 00:13:11.365 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:11.625 /dev/nbd0 00:13:11.625 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:11.625 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:11.625 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:11.625 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:11.625 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:11.625 mke2fs 1.47.0 (5-Feb-2023) 00:13:11.625 Discarding device blocks: 0/4096 
done 00:13:11.625 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:11.625 00:13:11.625 Allocating group tables: 0/1 done 00:13:11.626 Writing inode tables: 0/1 done 00:13:11.626 Creating journal (1024 blocks): done 00:13:11.626 Writing superblocks and filesystem accounting information: 0/1 done 00:13:11.626 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.626 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69477 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69477 ']' 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69477 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69477 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69477' 00:13:11.887 killing process with pid 69477 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69477 00:13:11.887 11:27:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69477 00:13:12.458 11:27:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:12.458 00:13:12.458 real 0m10.389s 00:13:12.458 user 0m14.071s 00:13:12.458 sys 0m3.577s 00:13:12.458 11:27:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.458 ************************************ 00:13:12.458 END TEST bdev_nbd 00:13:12.458 11:27:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:12.458 ************************************ 
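Aside (not part of the captured output): the nbd_with_lvol_verify step above reduces to a short RPC sequence against the NBD target. A minimal bash sketch of the same flow, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock and the repo layout used in this run; the wait loop mirrors what waitfornbd_exit in nbd_common.sh does, and the poll interval is an assumption of the sketch.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # Back the logical volume with a small malloc bdev: 16 MiB, 512-byte blocks.
  $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
  $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $RPC bdev_lvol_create lvol 4 -l lvs      # 4 MiB lvol inside the "lvs" store
  # Expose the lvol as a kernel block device and put a filesystem on it.
  $RPC nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  # Tear down, then poll until the kernel drops nbd0 from /proc/partitions.
  $RPC nbd_stop_disk /dev/nbd0
  for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions || break
    sleep 0.1                              # poll interval assumed for this sketch
  done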
00:13:12.458 11:27:57 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:12.458 11:27:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:12.458 11:27:57 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:12.458 11:27:57 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:12.458 11:27:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:12.458 11:27:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.458 11:27:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.458 ************************************ 00:13:12.458 START TEST bdev_fio 00:13:12.458 ************************************ 00:13:12.458 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:12.458 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:12.458 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:12.458 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:12.458 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:12.458 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:12.459 ************************************ 00:13:12.459 START TEST bdev_fio_rw_verify 00:13:12.459 ************************************ 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:12.459 11:27:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:12.720 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:12.720 fio-3.35 00:13:12.720 Starting 6 threads 00:13:24.955 00:13:24.955 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69876: Sun Oct 27 11:28:08 2024 00:13:24.955 read: IOPS=18.1k, BW=70.8MiB/s (74.2MB/s)(708MiB/10002msec) 00:13:24.955 slat (usec): min=2, max=2126, avg= 5.49, stdev=18.92 00:13:24.955 clat (usec): min=55, max=7912, avg=1061.92, stdev=833.71 00:13:24.955 lat (usec): min=59, max=7916, avg=1067.41, stdev=834.47 
00:13:24.955 clat percentiles (usec): 00:13:24.955 | 50.000th=[ 824], 99.000th=[ 3687], 99.900th=[ 5211], 99.990th=[ 6783], 00:13:24.955 | 99.999th=[ 7898] 00:13:24.955 write: IOPS=18.5k, BW=72.2MiB/s (75.7MB/s)(722MiB/10002msec); 0 zone resets 00:13:24.955 slat (usec): min=5, max=6199, avg=36.93, stdev=140.22 00:13:24.955 clat (usec): min=61, max=10373, avg=1260.35, stdev=943.21 00:13:24.955 lat (usec): min=75, max=10411, avg=1297.29, stdev=960.22 00:13:24.955 clat percentiles (usec): 00:13:24.955 | 50.000th=[ 1012], 99.000th=[ 4228], 99.900th=[ 5800], 99.990th=[ 7767], 00:13:24.955 | 99.999th=[10421] 00:13:24.955 bw ( KiB/s): min=47356, max=141499, per=100.00%, avg=74755.53, stdev=4314.53, samples=114 00:13:24.955 iops : min=11836, max=35372, avg=18688.05, stdev=1078.60, samples=114 00:13:24.955 lat (usec) : 100=0.06%, 250=8.04%, 500=20.39%, 750=14.73%, 1000=10.00% 00:13:24.955 lat (msec) : 2=30.41%, 4=15.36%, 10=1.00%, 20=0.01% 00:13:24.955 cpu : usr=42.27%, sys=32.46%, ctx=6156, majf=0, minf=17246 00:13:24.955 IO depths : 1=11.5%, 2=23.9%, 4=51.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.955 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.955 issued rwts: total=181198,184831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.955 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:24.955 00:13:24.955 Run status group 0 (all jobs): 00:13:24.955 READ: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=708MiB (742MB), run=10002-10002msec 00:13:24.955 WRITE: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=722MiB (757MB), run=10002-10002msec 00:13:24.955 ----------------------------------------------------- 00:13:24.955 Suppressions used: 00:13:24.955 count bytes template 00:13:24.955 6 48 /usr/src/fio/parse.c 00:13:24.955 3507 336672 /usr/src/fio/iolog.c 00:13:24.955 1 8 libtcmalloc_minimal.so 00:13:24.955 1 904 libcrypto.so 00:13:24.955 ----------------------------------------------------- 00:13:24.956 00:13:24.956 00:13:24.956 real 0m11.774s 00:13:24.956 user 0m26.733s 00:13:24.956 sys 0m19.755s 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:24.956 ************************************ 00:13:24.956 END TEST bdev_fio_rw_verify 00:13:24.956 ************************************ 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "be99615c-6711-43d3-91a9-6da611ef0066"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "be99615c-6711-43d3-91a9-6da611ef0066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d25a5280-f5bd-4c2e-a9af-6411f036f83e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d25a5280-f5bd-4c2e-a9af-6411f036f83e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7ff13d2c-ace4-42ed-a015-e462cf45171b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ff13d2c-ace4-42ed-a015-e462cf45171b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "f5baae82-1297-4f1d-9caa-757e8db9b19e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f5baae82-1297-4f1d-9caa-757e8db9b19e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "0fc66520-1d5f-4662-9bb9-7c421dc480c1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0fc66520-1d5f-4662-9bb9-7c421dc480c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "6e3bda89-954e-45c0-9647-6f15948dd513"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6e3bda89-954e-45c0-9647-6f15948dd513",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.956 /home/vagrant/spdk_repo/spdk 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:24.956 00:13:24.956 real 0m11.930s 00:13:24.956 user 
0m26.813s 00:13:24.956 sys 0m19.810s 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.956 11:28:09 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:24.956 ************************************ 00:13:24.956 END TEST bdev_fio 00:13:24.956 ************************************ 00:13:24.956 11:28:09 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.956 11:28:09 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.956 11:28:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:24.956 11:28:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.956 11:28:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.956 ************************************ 00:13:24.956 START TEST bdev_verify 00:13:24.956 ************************************ 00:13:24.956 11:28:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.956 [2024-10-27 11:28:09.651573] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:13:24.956 [2024-10-27 11:28:09.651690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70050 ] 00:13:24.956 [2024-10-27 11:28:09.808325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.956 [2024-10-27 11:28:09.906722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.956 [2024-10-27 11:28:09.906800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.218 Running I/O for 5 seconds... 
00:13:27.641 20832.00 IOPS, 81.38 MiB/s [2024-10-27T11:28:13.492Z] 21920.00 IOPS, 85.62 MiB/s [2024-10-27T11:28:14.872Z] 22368.00 IOPS, 87.38 MiB/s [2024-10-27T11:28:15.445Z] 22320.00 IOPS, 87.19 MiB/s [2024-10-27T11:28:15.445Z] 22508.80 IOPS, 87.92 MiB/s 00:13:30.164 Latency(us) 00:13:30.164 [2024-10-27T11:28:15.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.164 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0x0 length 0xa0000 00:13:30.164 nvme0n1 : 5.09 1860.91 7.27 0.00 0.00 68637.17 11846.89 71787.13 00:13:30.164 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0xa0000 length 0xa0000 00:13:30.164 nvme0n1 : 5.03 1704.74 6.66 0.00 0.00 74918.29 6956.90 79449.80 00:13:30.164 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0x0 length 0xbd0bd 00:13:30.164 nvme1n1 : 5.08 2178.41 8.51 0.00 0.00 58383.77 4688.34 68157.44 00:13:30.164 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:30.164 nvme1n1 : 5.07 2068.92 8.08 0.00 0.00 61572.84 6956.90 60091.47 00:13:30.164 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0x0 length 0x80000 00:13:30.164 nvme2n1 : 5.09 1887.50 7.37 0.00 0.00 67355.70 8065.97 63721.16 00:13:30.164 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.164 Verification LBA range: start 0x80000 length 0x80000 00:13:30.164 nvme2n1 : 5.06 1745.52 6.82 0.00 0.00 72836.26 14720.39 71787.13 00:13:30.164 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x0 length 0x80000 00:13:30.165 nvme2n2 : 5.08 1864.63 7.28 0.00 0.00 67974.25 8065.97 68157.44 00:13:30.165 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x80000 length 0x80000 00:13:30.165 nvme2n2 : 5.08 1737.35 6.79 0.00 0.00 72976.03 8065.97 71787.13 00:13:30.165 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x0 length 0x80000 00:13:30.165 nvme2n3 : 5.09 1885.20 7.36 0.00 0.00 67094.37 8318.03 73803.62 00:13:30.165 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x80000 length 0x80000 00:13:30.165 nvme2n3 : 5.07 1718.26 6.71 0.00 0.00 73627.48 9527.93 66140.95 00:13:30.165 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x0 length 0x20000 00:13:30.165 nvme3n1 : 5.09 1859.50 7.26 0.00 0.00 67865.99 6301.54 68964.04 00:13:30.165 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:30.165 Verification LBA range: start 0x20000 length 0x20000 00:13:30.165 nvme3n1 : 5.09 1711.22 6.68 0.00 0.00 73756.88 6276.33 74206.92 00:13:30.165 [2024-10-27T11:28:15.446Z] =================================================================================================================== 00:13:30.165 [2024-10-27T11:28:15.446Z] Total : 22222.15 86.81 0.00 0.00 68538.81 4688.34 79449.80 00:13:31.106 00:13:31.106 real 0m6.665s 00:13:31.106 user 0m11.111s 00:13:31.106 sys 0m1.176s 00:13:31.106 11:28:16 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.106 ************************************ 00:13:31.106 END TEST bdev_verify 00:13:31.106 ************************************ 00:13:31.106 11:28:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:31.106 11:28:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:31.106 11:28:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:31.106 11:28:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.106 11:28:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.106 ************************************ 00:13:31.106 START TEST bdev_verify_big_io 00:13:31.106 ************************************ 00:13:31.106 11:28:16 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:31.367 [2024-10-27 11:28:16.397030] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:13:31.367 [2024-10-27 11:28:16.397716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70151 ] 00:13:31.367 [2024-10-27 11:28:16.563374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.628 [2024-10-27 11:28:16.685405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.628 [2024-10-27 11:28:16.685435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.197 Running I/O for 5 seconds... 
00:13:38.308 1392.00 IOPS, 87.00 MiB/s [2024-10-27T11:28:23.589Z] 3031.00 IOPS, 189.44 MiB/s 00:13:38.308 Latency(us) 00:13:38.308 [2024-10-27T11:28:23.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.308 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0xa000 00:13:38.308 nvme0n1 : 6.06 84.55 5.28 0.00 0.00 1482844.95 145994.04 1632552.17 00:13:38.308 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0xa000 length 0xa000 00:13:38.308 nvme0n1 : 5.76 133.25 8.33 0.00 0.00 915227.57 10132.87 1096971.82 00:13:38.308 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0xbd0b 00:13:38.308 nvme1n1 : 6.13 118.89 7.43 0.00 0.00 995839.89 64124.46 1419610.58 00:13:38.308 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:38.308 nvme1n1 : 5.77 163.70 10.23 0.00 0.00 715569.60 47387.57 793691.37 00:13:38.308 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0x8000 00:13:38.308 nvme2n1 : 6.16 67.58 4.22 0.00 0.00 1675495.70 146800.64 1871304.86 00:13:38.308 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x8000 length 0x8000 00:13:38.308 nvme2n1 : 5.98 90.96 5.69 0.00 0.00 1244338.11 110503.78 2723071.21 00:13:38.308 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0x8000 00:13:38.308 nvme2n2 : 6.16 87.03 5.44 0.00 0.00 1230494.27 25609.45 2452054.65 00:13:38.308 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x8000 length 0x8000 00:13:38.308 nvme2n2 : 6.04 153.55 9.60 0.00 0.00 734894.11 57268.38 1032444.06 00:13:38.308 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0x8000 00:13:38.308 nvme2n3 : 6.19 92.51 5.78 0.00 0.00 1105818.25 5469.74 3123143.29 00:13:38.308 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x8000 length 0x8000 00:13:38.308 nvme2n3 : 6.05 148.06 9.25 0.00 0.00 741604.09 9779.99 793691.37 00:13:38.308 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x0 length 0x2000 00:13:38.308 nvme3n1 : 6.34 151.43 9.46 0.00 0.00 646841.66 1127.98 3278009.90 00:13:38.308 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:38.308 Verification LBA range: start 0x2000 length 0x2000 00:13:38.308 nvme3n1 : 6.06 124.16 7.76 0.00 0.00 855901.17 8721.33 2516582.40 00:13:38.308 [2024-10-27T11:28:23.589Z] =================================================================================================================== 00:13:38.308 [2024-10-27T11:28:23.589Z] Total : 1415.66 88.48 0.00 0.00 951360.83 1127.98 3278009.90 00:13:39.252 00:13:39.252 real 0m8.067s 00:13:39.252 user 0m14.906s 00:13:39.252 sys 0m0.411s 00:13:39.252 11:28:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.252 11:28:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:13:39.252 ************************************ 00:13:39.252 END TEST bdev_verify_big_io 00:13:39.252 ************************************ 00:13:39.252 11:28:24 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:39.252 11:28:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:39.252 11:28:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.252 11:28:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.252 ************************************ 00:13:39.252 START TEST bdev_write_zeroes 00:13:39.252 ************************************ 00:13:39.252 11:28:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:39.252 [2024-10-27 11:28:24.511901] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:13:39.252 [2024-10-27 11:28:24.512011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70262 ] 00:13:39.513 [2024-10-27 11:28:24.669831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.513 [2024-10-27 11:28:24.758251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.087 Running I/O for 1 seconds... 00:13:41.031 79968.00 IOPS, 312.38 MiB/s 00:13:41.031 Latency(us) 00:13:41.031 [2024-10-27T11:28:26.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.031 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme0n1 : 1.01 13024.33 50.88 0.00 0.00 9816.52 5646.18 20064.10 00:13:41.031 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme1n1 : 1.02 14610.80 57.07 0.00 0.00 8745.00 3881.75 19055.85 00:13:41.031 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme2n1 : 1.02 13000.46 50.78 0.00 0.00 9788.39 5721.80 19358.33 00:13:41.031 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme2n2 : 1.03 12949.43 50.58 0.00 0.00 9797.20 4360.66 22181.42 00:13:41.031 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme2n3 : 1.03 12932.11 50.52 0.00 0.00 9803.87 4461.49 22181.42 00:13:41.031 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:41.031 nvme3n1 : 1.02 12905.13 50.41 0.00 0.00 9817.20 4537.11 22080.59 00:13:41.031 [2024-10-27T11:28:26.312Z] =================================================================================================================== 00:13:41.031 [2024-10-27T11:28:26.312Z] Total : 79422.26 310.24 0.00 0.00 9609.70 3881.75 22181.42 00:13:41.975 00:13:41.975 real 0m2.456s 00:13:41.975 user 0m1.893s 00:13:41.975 sys 0m0.397s 00:13:41.975 11:28:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.975 ************************************ 00:13:41.975 END TEST bdev_write_zeroes 00:13:41.975 ************************************ 00:13:41.975 11:28:26 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:41.975 11:28:26 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:41.975 11:28:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:41.975 11:28:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.975 11:28:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.975 ************************************ 00:13:41.975 START TEST bdev_json_nonenclosed 00:13:41.975 ************************************ 00:13:41.975 11:28:26 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:41.975 [2024-10-27 11:28:27.041845] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:13:41.975 [2024-10-27 11:28:27.041980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70308 ] 00:13:41.975 [2024-10-27 11:28:27.206566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.236 [2024-10-27 11:28:27.324475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.236 [2024-10-27 11:28:27.324563] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:42.236 [2024-10-27 11:28:27.324582] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:42.236 [2024-10-27 11:28:27.324592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:42.236 00:13:42.236 real 0m0.539s 00:13:42.236 user 0m0.328s 00:13:42.236 sys 0m0.105s 00:13:42.236 11:28:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.236 ************************************ 00:13:42.236 END TEST bdev_json_nonenclosed 00:13:42.236 ************************************ 00:13:42.236 11:28:27 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:42.497 11:28:27 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.497 11:28:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:42.497 11:28:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.497 11:28:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.497 ************************************ 00:13:42.497 START TEST bdev_json_nonarray 00:13:42.497 ************************************ 00:13:42.497 11:28:27 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.497 [2024-10-27 11:28:27.646098] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:13:42.497 [2024-10-27 11:28:27.646234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70335 ] 00:13:42.758 [2024-10-27 11:28:27.810734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.758 [2024-10-27 11:28:27.929500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.758 [2024-10-27 11:28:27.929601] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:42.758 [2024-10-27 11:28:27.929620] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:42.758 [2024-10-27 11:28:27.929631] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.019 00:13:43.019 real 0m0.543s 00:13:43.019 user 0m0.329s 00:13:43.019 sys 0m0.108s 00:13:43.019 11:28:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.019 11:28:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:43.019 ************************************ 00:13:43.019 END TEST bdev_json_nonarray 00:13:43.019 ************************************ 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:43.019 11:28:28 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:43.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:47.805 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.805 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.805 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.805 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:47.805 00:13:47.805 real 0m57.620s 00:13:47.805 user 1m25.155s 00:13:47.805 sys 0m34.191s 00:13:47.805 11:28:32 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.805 ************************************ 00:13:47.805 END TEST blockdev_xnvme 00:13:47.805 ************************************ 00:13:47.805 11:28:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.805 11:28:32 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:47.805 11:28:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.805 11:28:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.805 11:28:32 -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.805 ************************************ 00:13:47.805 START TEST ublk 00:13:47.805 ************************************ 00:13:47.805 11:28:32 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:47.805 * Looking for test storage... 00:13:47.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:47.805 11:28:32 ublk -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:47.805 11:28:32 ublk -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:47.805 11:28:32 ublk -- common/autotest_common.sh@1689 -- # lcov --version 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.805 11:28:33 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.805 11:28:33 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.805 11:28:33 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.805 11:28:33 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.805 11:28:33 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.805 11:28:33 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:47.805 11:28:33 ublk -- scripts/common.sh@345 -- # : 1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.805 11:28:33 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.805 11:28:33 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@353 -- # local d=1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.805 11:28:33 ublk -- scripts/common.sh@355 -- # echo 1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.805 11:28:33 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@353 -- # local d=2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.805 11:28:33 ublk -- scripts/common.sh@355 -- # echo 2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.805 11:28:33 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.805 11:28:33 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.805 11:28:33 ublk -- scripts/common.sh@368 -- # return 0 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:47.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.805 --rc genhtml_branch_coverage=1 00:13:47.805 --rc genhtml_function_coverage=1 00:13:47.805 --rc genhtml_legend=1 00:13:47.805 --rc geninfo_all_blocks=1 00:13:47.805 --rc geninfo_unexecuted_blocks=1 00:13:47.805 00:13:47.805 ' 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:47.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.805 --rc genhtml_branch_coverage=1 00:13:47.805 --rc genhtml_function_coverage=1 00:13:47.805 --rc genhtml_legend=1 00:13:47.805 --rc geninfo_all_blocks=1 00:13:47.805 --rc geninfo_unexecuted_blocks=1 00:13:47.805 00:13:47.805 ' 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:47.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.805 --rc genhtml_branch_coverage=1 00:13:47.805 --rc genhtml_function_coverage=1 00:13:47.805 --rc genhtml_legend=1 00:13:47.805 --rc geninfo_all_blocks=1 00:13:47.805 --rc geninfo_unexecuted_blocks=1 00:13:47.805 00:13:47.805 ' 00:13:47.805 11:28:33 ublk -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:47.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.805 --rc genhtml_branch_coverage=1 00:13:47.805 --rc genhtml_function_coverage=1 00:13:47.805 --rc genhtml_legend=1 00:13:47.805 --rc geninfo_all_blocks=1 00:13:47.805 --rc geninfo_unexecuted_blocks=1 00:13:47.805 00:13:47.805 ' 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:47.805 11:28:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:47.805 11:28:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:47.805 11:28:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:47.805 11:28:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:47.805 11:28:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:47.805 11:28:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:47.805 11:28:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:47.805 11:28:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:47.805 11:28:33 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:47.805 11:28:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:47.806 11:28:33 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.806 11:28:33 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.806 11:28:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:47.806 ************************************ 00:13:47.806 START TEST test_save_ublk_config 00:13:47.806 ************************************ 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70624 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70624 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70624 ']' 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.806 11:28:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.067 [2024-10-27 11:28:33.151563] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:13:48.067 [2024-10-27 11:28:33.151715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70624 ] 00:13:48.067 [2024-10-27 11:28:33.312521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.328 [2024-10-27 11:28:33.432739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.901 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:48.901 [2024-10-27 11:28:34.169328] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:48.901 [2024-10-27 11:28:34.170260] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:49.163 malloc0 00:13:49.163 [2024-10-27 11:28:34.241460] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:49.163 [2024-10-27 11:28:34.241561] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:49.163 [2024-10-27 11:28:34.241572] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:49.163 [2024-10-27 11:28:34.241580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:49.163 [2024-10-27 11:28:34.250437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:49.163 [2024-10-27 11:28:34.250468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:49.163 [2024-10-27 11:28:34.257341] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:49.163 [2024-10-27 11:28:34.257466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:49.163 [2024-10-27 11:28:34.274332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:49.163 0 00:13:49.163 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.163 11:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:49.163 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.163 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:49.425 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.425 11:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:49.425 "subsystems": [ 00:13:49.425 { 00:13:49.425 "subsystem": "fsdev", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "fsdev_set_opts", 00:13:49.425 "params": { 00:13:49.425 "fsdev_io_pool_size": 65535, 00:13:49.425 "fsdev_io_cache_size": 256 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "keyring", 00:13:49.425 "config": [] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "iobuf", 00:13:49.425 "config": [ 00:13:49.425 { 
00:13:49.425 "method": "iobuf_set_options", 00:13:49.425 "params": { 00:13:49.425 "small_pool_count": 8192, 00:13:49.425 "large_pool_count": 1024, 00:13:49.425 "small_bufsize": 8192, 00:13:49.425 "large_bufsize": 135168, 00:13:49.425 "enable_numa": false 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "sock", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "sock_set_default_impl", 00:13:49.425 "params": { 00:13:49.425 "impl_name": "posix" 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "sock_impl_set_options", 00:13:49.425 "params": { 00:13:49.425 "impl_name": "ssl", 00:13:49.425 "recv_buf_size": 4096, 00:13:49.425 "send_buf_size": 4096, 00:13:49.425 "enable_recv_pipe": true, 00:13:49.425 "enable_quickack": false, 00:13:49.425 "enable_placement_id": 0, 00:13:49.425 "enable_zerocopy_send_server": true, 00:13:49.425 "enable_zerocopy_send_client": false, 00:13:49.425 "zerocopy_threshold": 0, 00:13:49.425 "tls_version": 0, 00:13:49.425 "enable_ktls": false 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "sock_impl_set_options", 00:13:49.425 "params": { 00:13:49.425 "impl_name": "posix", 00:13:49.425 "recv_buf_size": 2097152, 00:13:49.425 "send_buf_size": 2097152, 00:13:49.425 "enable_recv_pipe": true, 00:13:49.425 "enable_quickack": false, 00:13:49.425 "enable_placement_id": 0, 00:13:49.425 "enable_zerocopy_send_server": true, 00:13:49.425 "enable_zerocopy_send_client": false, 00:13:49.425 "zerocopy_threshold": 0, 00:13:49.425 "tls_version": 0, 00:13:49.425 "enable_ktls": false 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "vmd", 00:13:49.425 "config": [] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "accel", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "accel_set_options", 00:13:49.425 "params": { 00:13:49.425 "small_cache_size": 128, 00:13:49.425 "large_cache_size": 16, 00:13:49.425 "task_count": 2048, 00:13:49.425 "sequence_count": 2048, 00:13:49.425 "buf_count": 2048 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "bdev", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "bdev_set_options", 00:13:49.425 "params": { 00:13:49.425 "bdev_io_pool_size": 65535, 00:13:49.425 "bdev_io_cache_size": 256, 00:13:49.425 "bdev_auto_examine": true, 00:13:49.425 "iobuf_small_cache_size": 128, 00:13:49.425 "iobuf_large_cache_size": 16 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_raid_set_options", 00:13:49.425 "params": { 00:13:49.425 "process_window_size_kb": 1024, 00:13:49.425 "process_max_bandwidth_mb_sec": 0 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_iscsi_set_options", 00:13:49.425 "params": { 00:13:49.425 "timeout_sec": 30 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_nvme_set_options", 00:13:49.425 "params": { 00:13:49.425 "action_on_timeout": "none", 00:13:49.425 "timeout_us": 0, 00:13:49.425 "timeout_admin_us": 0, 00:13:49.425 "keep_alive_timeout_ms": 10000, 00:13:49.425 "arbitration_burst": 0, 00:13:49.425 "low_priority_weight": 0, 00:13:49.425 "medium_priority_weight": 0, 00:13:49.425 "high_priority_weight": 0, 00:13:49.425 "nvme_adminq_poll_period_us": 10000, 00:13:49.425 "nvme_ioq_poll_period_us": 0, 00:13:49.425 "io_queue_requests": 0, 00:13:49.425 "delay_cmd_submit": true, 00:13:49.425 "transport_retry_count": 4, 00:13:49.425 
"bdev_retry_count": 3, 00:13:49.425 "transport_ack_timeout": 0, 00:13:49.425 "ctrlr_loss_timeout_sec": 0, 00:13:49.425 "reconnect_delay_sec": 0, 00:13:49.425 "fast_io_fail_timeout_sec": 0, 00:13:49.425 "disable_auto_failback": false, 00:13:49.425 "generate_uuids": false, 00:13:49.425 "transport_tos": 0, 00:13:49.425 "nvme_error_stat": false, 00:13:49.425 "rdma_srq_size": 0, 00:13:49.425 "io_path_stat": false, 00:13:49.425 "allow_accel_sequence": false, 00:13:49.425 "rdma_max_cq_size": 0, 00:13:49.425 "rdma_cm_event_timeout_ms": 0, 00:13:49.425 "dhchap_digests": [ 00:13:49.425 "sha256", 00:13:49.425 "sha384", 00:13:49.425 "sha512" 00:13:49.425 ], 00:13:49.425 "dhchap_dhgroups": [ 00:13:49.425 "null", 00:13:49.425 "ffdhe2048", 00:13:49.425 "ffdhe3072", 00:13:49.425 "ffdhe4096", 00:13:49.425 "ffdhe6144", 00:13:49.425 "ffdhe8192" 00:13:49.425 ] 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_nvme_set_hotplug", 00:13:49.425 "params": { 00:13:49.425 "period_us": 100000, 00:13:49.425 "enable": false 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_malloc_create", 00:13:49.425 "params": { 00:13:49.425 "name": "malloc0", 00:13:49.425 "num_blocks": 8192, 00:13:49.425 "block_size": 4096, 00:13:49.425 "physical_block_size": 4096, 00:13:49.425 "uuid": "97c215ae-9d82-4031-ae1c-c17437de2219", 00:13:49.425 "optimal_io_boundary": 0, 00:13:49.425 "md_size": 0, 00:13:49.425 "dif_type": 0, 00:13:49.425 "dif_is_head_of_md": false, 00:13:49.425 "dif_pi_format": 0 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "bdev_wait_for_examine" 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "scsi", 00:13:49.425 "config": null 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "scheduler", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "framework_set_scheduler", 00:13:49.425 "params": { 00:13:49.425 "name": "static" 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "vhost_scsi", 00:13:49.425 "config": [] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "vhost_blk", 00:13:49.425 "config": [] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "ublk", 00:13:49.425 "config": [ 00:13:49.425 { 00:13:49.425 "method": "ublk_create_target", 00:13:49.425 "params": { 00:13:49.425 "cpumask": "1" 00:13:49.425 } 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "method": "ublk_start_disk", 00:13:49.425 "params": { 00:13:49.425 "bdev_name": "malloc0", 00:13:49.425 "ublk_id": 0, 00:13:49.425 "num_queues": 1, 00:13:49.425 "queue_depth": 128 00:13:49.425 } 00:13:49.425 } 00:13:49.425 ] 00:13:49.425 }, 00:13:49.425 { 00:13:49.425 "subsystem": "nbd", 00:13:49.425 "config": [] 00:13:49.425 }, 00:13:49.425 { 00:13:49.426 "subsystem": "nvmf", 00:13:49.426 "config": [ 00:13:49.426 { 00:13:49.426 "method": "nvmf_set_config", 00:13:49.426 "params": { 00:13:49.426 "discovery_filter": "match_any", 00:13:49.426 "admin_cmd_passthru": { 00:13:49.426 "identify_ctrlr": false 00:13:49.426 }, 00:13:49.426 "dhchap_digests": [ 00:13:49.426 "sha256", 00:13:49.426 "sha384", 00:13:49.426 "sha512" 00:13:49.426 ], 00:13:49.426 "dhchap_dhgroups": [ 00:13:49.426 "null", 00:13:49.426 "ffdhe2048", 00:13:49.426 "ffdhe3072", 00:13:49.426 "ffdhe4096", 00:13:49.426 "ffdhe6144", 00:13:49.426 "ffdhe8192" 00:13:49.426 ] 00:13:49.426 } 00:13:49.426 }, 00:13:49.426 { 00:13:49.426 "method": "nvmf_set_max_subsystems", 00:13:49.426 "params": { 00:13:49.426 "max_subsystems": 1024 
00:13:49.426 } 00:13:49.426 }, 00:13:49.426 { 00:13:49.426 "method": "nvmf_set_crdt", 00:13:49.426 "params": { 00:13:49.426 "crdt1": 0, 00:13:49.426 "crdt2": 0, 00:13:49.426 "crdt3": 0 00:13:49.426 } 00:13:49.426 } 00:13:49.426 ] 00:13:49.426 }, 00:13:49.426 { 00:13:49.426 "subsystem": "iscsi", 00:13:49.426 "config": [ 00:13:49.426 { 00:13:49.426 "method": "iscsi_set_options", 00:13:49.426 "params": { 00:13:49.426 "node_base": "iqn.2016-06.io.spdk", 00:13:49.426 "max_sessions": 128, 00:13:49.426 "max_connections_per_session": 2, 00:13:49.426 "max_queue_depth": 64, 00:13:49.426 "default_time2wait": 2, 00:13:49.426 "default_time2retain": 20, 00:13:49.426 "first_burst_length": 8192, 00:13:49.426 "immediate_data": true, 00:13:49.426 "allow_duplicated_isid": false, 00:13:49.426 "error_recovery_level": 0, 00:13:49.426 "nop_timeout": 60, 00:13:49.426 "nop_in_interval": 30, 00:13:49.426 "disable_chap": false, 00:13:49.426 "require_chap": false, 00:13:49.426 "mutual_chap": false, 00:13:49.426 "chap_group": 0, 00:13:49.426 "max_large_datain_per_connection": 64, 00:13:49.426 "max_r2t_per_connection": 4, 00:13:49.426 "pdu_pool_size": 36864, 00:13:49.426 "immediate_data_pool_size": 16384, 00:13:49.426 "data_out_pool_size": 2048 00:13:49.426 } 00:13:49.426 } 00:13:49.426 ] 00:13:49.426 } 00:13:49.426 ] 00:13:49.426 }' 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70624 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70624 ']' 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70624 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70624 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:49.426 killing process with pid 70624 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70624' 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70624 00:13:49.426 11:28:34 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70624 00:13:50.810 [2024-10-27 11:28:35.699905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:50.810 [2024-10-27 11:28:35.735451] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:50.810 [2024-10-27 11:28:35.735601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:50.810 [2024-10-27 11:28:35.749324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:50.810 [2024-10-27 11:28:35.749393] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:50.810 [2024-10-27 11:28:35.749418] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:50.810 [2024-10-27 11:28:35.749469] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:50.810 [2024-10-27 11:28:35.749631] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70684 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70684 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 70684 ']' 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.196 11:28:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:52.196 "subsystems": [ 00:13:52.196 { 00:13:52.196 "subsystem": "fsdev", 00:13:52.196 "config": [ 00:13:52.196 { 00:13:52.196 "method": "fsdev_set_opts", 00:13:52.196 "params": { 00:13:52.196 "fsdev_io_pool_size": 65535, 00:13:52.196 "fsdev_io_cache_size": 256 00:13:52.196 } 00:13:52.196 } 00:13:52.196 ] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "keyring", 00:13:52.196 "config": [] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "iobuf", 00:13:52.196 "config": [ 00:13:52.196 { 00:13:52.196 "method": "iobuf_set_options", 00:13:52.196 "params": { 00:13:52.196 "small_pool_count": 8192, 00:13:52.196 "large_pool_count": 1024, 00:13:52.196 "small_bufsize": 8192, 00:13:52.196 "large_bufsize": 135168, 00:13:52.196 "enable_numa": false 00:13:52.196 } 00:13:52.196 } 00:13:52.196 ] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "sock", 00:13:52.196 "config": [ 00:13:52.196 { 00:13:52.196 "method": "sock_set_default_impl", 00:13:52.196 "params": { 00:13:52.196 "impl_name": "posix" 00:13:52.196 } 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "method": "sock_impl_set_options", 00:13:52.196 "params": { 00:13:52.196 "impl_name": "ssl", 00:13:52.196 "recv_buf_size": 4096, 00:13:52.196 "send_buf_size": 4096, 00:13:52.196 "enable_recv_pipe": true, 00:13:52.196 "enable_quickack": false, 00:13:52.196 "enable_placement_id": 0, 00:13:52.196 "enable_zerocopy_send_server": true, 00:13:52.196 "enable_zerocopy_send_client": false, 00:13:52.196 "zerocopy_threshold": 0, 00:13:52.196 "tls_version": 0, 00:13:52.196 "enable_ktls": false 00:13:52.196 } 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "method": "sock_impl_set_options", 00:13:52.196 "params": { 00:13:52.196 "impl_name": "posix", 00:13:52.196 "recv_buf_size": 2097152, 00:13:52.196 "send_buf_size": 2097152, 00:13:52.196 "enable_recv_pipe": true, 00:13:52.196 "enable_quickack": false, 00:13:52.196 "enable_placement_id": 0, 00:13:52.196 "enable_zerocopy_send_server": true, 00:13:52.196 "enable_zerocopy_send_client": false, 00:13:52.196 "zerocopy_threshold": 0, 00:13:52.196 "tls_version": 0, 00:13:52.196 "enable_ktls": false 00:13:52.196 } 00:13:52.196 } 00:13:52.196 ] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "vmd", 00:13:52.196 "config": [] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "accel", 00:13:52.196 "config": [ 00:13:52.196 { 00:13:52.196 "method": "accel_set_options", 00:13:52.196 "params": { 00:13:52.196 "small_cache_size": 128, 
00:13:52.196 "large_cache_size": 16, 00:13:52.196 "task_count": 2048, 00:13:52.196 "sequence_count": 2048, 00:13:52.196 "buf_count": 2048 00:13:52.196 } 00:13:52.196 } 00:13:52.196 ] 00:13:52.196 }, 00:13:52.196 { 00:13:52.196 "subsystem": "bdev", 00:13:52.196 "config": [ 00:13:52.196 { 00:13:52.197 "method": "bdev_set_options", 00:13:52.197 "params": { 00:13:52.197 "bdev_io_pool_size": 65535, 00:13:52.197 "bdev_io_cache_size": 256, 00:13:52.197 "bdev_auto_examine": true, 00:13:52.197 "iobuf_small_cache_size": 128, 00:13:52.197 "iobuf_large_cache_size": 16 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_raid_set_options", 00:13:52.197 "params": { 00:13:52.197 "process_window_size_kb": 1024, 00:13:52.197 "process_max_bandwidth_mb_sec": 0 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_iscsi_set_options", 00:13:52.197 "params": { 00:13:52.197 "timeout_sec": 30 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_nvme_set_options", 00:13:52.197 "params": { 00:13:52.197 "action_on_timeout": "none", 00:13:52.197 "timeout_us": 0, 00:13:52.197 "timeout_admin_us": 0, 00:13:52.197 "keep_alive_timeout_ms": 10000, 00:13:52.197 "arbitration_burst": 0, 00:13:52.197 "low_priority_weight": 0, 00:13:52.197 "medium_priority_weight": 0, 00:13:52.197 "high_priority_weight": 0, 00:13:52.197 "nvme_adminq_poll_period_us": 10000, 00:13:52.197 "nvme_ioq_poll_period_us": 0, 00:13:52.197 "io_queue_requests": 0, 00:13:52.197 "delay_cmd_submit": true, 00:13:52.197 "transport_retry_count": 4, 00:13:52.197 "bdev_retry_count": 3, 00:13:52.197 "transport_ack_timeout": 0, 00:13:52.197 "ctrlr_loss_timeout_sec": 0, 00:13:52.197 "reconnect_delay_sec": 0, 00:13:52.197 "fast_io_fail_timeout_sec": 0, 00:13:52.197 "disable_auto_failback": false, 00:13:52.197 "generate_uuids": false, 00:13:52.197 "transport_tos": 0, 00:13:52.197 "nvme_error_stat": false, 00:13:52.197 "rdma_srq_size": 0, 00:13:52.197 "io_path_stat": false, 00:13:52.197 "allow_accel_sequence": false, 00:13:52.197 "rdma_max_cq_size": 0, 00:13:52.197 "rdma_cm_event_timeout_ms": 0, 00:13:52.197 "dhchap_digests": [ 00:13:52.197 "sha256", 00:13:52.197 "sha384", 00:13:52.197 "sha512" 00:13:52.197 ], 00:13:52.197 "dhchap_dhgroups": [ 00:13:52.197 "null", 00:13:52.197 "ffdhe2048", 00:13:52.197 "ffdhe3072", 00:13:52.197 "ffdhe4096", 00:13:52.197 "ffdhe6144", 00:13:52.197 "ffdhe8192" 00:13:52.197 ] 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_nvme_set_hotplug", 00:13:52.197 "params": { 00:13:52.197 "period_us": 100000, 00:13:52.197 "enable": false 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_malloc_create", 00:13:52.197 "params": { 00:13:52.197 "name": "malloc0", 00:13:52.197 "num_blocks": 8192, 00:13:52.197 "block_size": 4096, 00:13:52.197 "physical_block_size": 4096, 00:13:52.197 "uuid": "97c215ae-9d82-4031-ae1c-c17437de2219", 00:13:52.197 "optimal_io_boundary": 0, 00:13:52.197 "md_size": 0, 00:13:52.197 "dif_type": 0, 00:13:52.197 "dif_is_head_of_md": false, 00:13:52.197 "dif_pi_format": 0 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "bdev_wait_for_examine" 00:13:52.197 } 00:13:52.197 ] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "scsi", 00:13:52.197 "config": null 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "scheduler", 00:13:52.197 "config": [ 00:13:52.197 { 00:13:52.197 "method": "framework_set_scheduler", 00:13:52.197 "params": { 00:13:52.197 "name": "static" 00:13:52.197 } 
00:13:52.197 } 00:13:52.197 ] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "vhost_scsi", 00:13:52.197 "config": [] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "vhost_blk", 00:13:52.197 "config": [] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "ublk", 00:13:52.197 "config": [ 00:13:52.197 { 00:13:52.197 "method": "ublk_create_target", 00:13:52.197 "params": { 00:13:52.197 "cpumask": "1" 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "ublk_start_disk", 00:13:52.197 "params": { 00:13:52.197 "bdev_name": "malloc0", 00:13:52.197 "ublk_id": 0, 00:13:52.197 "num_queues": 1, 00:13:52.197 "queue_depth": 128 00:13:52.197 } 00:13:52.197 } 00:13:52.197 ] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "nbd", 00:13:52.197 "config": [] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "nvmf", 00:13:52.197 "config": [ 00:13:52.197 { 00:13:52.197 "method": "nvmf_set_config", 00:13:52.197 "params": { 00:13:52.197 "discovery_filter": "match_any", 00:13:52.197 "admin_cmd_passthru": { 00:13:52.197 "identify_ctrlr": false 00:13:52.197 }, 00:13:52.197 "dhchap_digests": [ 00:13:52.197 "sha256", 00:13:52.197 "sha384", 00:13:52.197 "sha512" 00:13:52.197 ], 00:13:52.197 "dhchap_dhgroups": [ 00:13:52.197 "null", 00:13:52.197 "ffdhe2048", 00:13:52.197 "ffdhe3072", 00:13:52.197 "ffdhe4096", 00:13:52.197 "ffdhe6144", 00:13:52.197 "ffdhe8192" 00:13:52.197 ] 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "nvmf_set_max_subsystems", 00:13:52.197 "params": { 00:13:52.197 "max_subsystems": 1024 00:13:52.197 } 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "method": "nvmf_set_crdt", 00:13:52.197 "params": { 00:13:52.197 "crdt1": 0, 00:13:52.197 "crdt2": 0, 00:13:52.197 "crdt3": 0 00:13:52.197 } 00:13:52.197 } 00:13:52.197 ] 00:13:52.197 }, 00:13:52.197 { 00:13:52.197 "subsystem": "iscsi", 00:13:52.197 "config": [ 00:13:52.197 { 00:13:52.197 "method": "iscsi_set_options", 00:13:52.197 "params": { 00:13:52.197 "node_base": "iqn.2016-06.io.spdk", 00:13:52.197 "max_sessions": 128, 00:13:52.197 "max_connections_per_session": 2, 00:13:52.197 "max_queue_depth": 64, 00:13:52.197 "default_time2wait": 2, 00:13:52.197 "default_time2retain": 20, 00:13:52.197 "first_burst_length": 8192, 00:13:52.197 "immediate_data": true, 00:13:52.197 "allow_duplicated_isid": false, 00:13:52.197 "error_recovery_level": 0, 00:13:52.197 "nop_timeout": 60, 00:13:52.197 "nop_in_interval": 30, 00:13:52.197 "disable_chap": false, 00:13:52.197 "require_chap": false, 00:13:52.197 "mutual_chap": false, 00:13:52.197 "chap_group": 0, 00:13:52.197 "max_large_datain_per_connection": 64, 00:13:52.197 "max_r2t_per_connection": 4, 00:13:52.197 "pdu_pool_size": 36864, 00:13:52.197 "immediate_data_pool_size": 16384, 00:13:52.197 "data_out_pool_size": 2048 00:13:52.197 } 00:13:52.197 } 00:13:52.197 ] 00:13:52.197 } 00:13:52.197 ] 00:13:52.197 }' 00:13:52.197 [2024-10-27 11:28:37.364964] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:13:52.197 [2024-10-27 11:28:37.365083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70684 ] 00:13:52.458 [2024-10-27 11:28:37.520549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.458 [2024-10-27 11:28:37.598696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.029 [2024-10-27 11:28:38.231365] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:53.029 [2024-10-27 11:28:38.231989] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:53.029 [2024-10-27 11:28:38.239400] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:53.029 [2024-10-27 11:28:38.239459] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:53.029 [2024-10-27 11:28:38.239467] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:53.029 [2024-10-27 11:28:38.239472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.029 [2024-10-27 11:28:38.248363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.029 [2024-10-27 11:28:38.248382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.029 [2024-10-27 11:28:38.255314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.029 [2024-10-27 11:28:38.255387] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.029 [2024-10-27 11:28:38.272312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.029 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70684 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 70684 ']' 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 70684 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70684 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.290 killing process with pid 70684 00:13:53.290 
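The trace above is the heart of test_save_ublk_config: the second target (pid 70684) rebuilt the ublk target and /dev/ublkb0 purely from the JSON produced by save_config, the test confirmed the block device exists, and that process is now being killed. A hedged sketch of the same save-and-replay pattern outside the harness, assuming scripts/rpc.py and the default RPC socket, would be:

    # sketch only: capture the live configuration and replay it into a fresh target
    ./scripts/rpc.py save_config > /tmp/ublk_config.json      # same RPC as ublk.sh@115
    kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null
    # -c takes a JSON config file; the harness pipes it through /dev/fd/63 instead
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &
    tgt_pid=$!
    until [ -b /dev/ublkb0 ]; do sleep 0.5; done              # device should reappear from the config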
11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70684' 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 70684 00:13:53.290 11:28:38 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 70684 00:13:54.234 [2024-10-27 11:28:39.356599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.234 [2024-10-27 11:28:39.395370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.234 [2024-10-27 11:28:39.395472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.234 [2024-10-27 11:28:39.403325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.234 [2024-10-27 11:28:39.403366] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:54.234 [2024-10-27 11:28:39.403372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:54.234 [2024-10-27 11:28:39.403387] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:54.234 [2024-10-27 11:28:39.403494] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:55.617 11:28:40 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:55.617 ************************************ 00:13:55.617 END TEST test_save_ublk_config 00:13:55.617 ************************************ 00:13:55.617 00:13:55.617 real 0m7.515s 00:13:55.617 user 0m5.053s 00:13:55.617 sys 0m3.065s 00:13:55.617 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.617 11:28:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 11:28:40 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70756 00:13:55.617 11:28:40 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:55.617 11:28:40 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70756 00:13:55.617 11:28:40 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@831 -- # '[' -z 70756 ']' 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.617 11:28:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 [2024-10-27 11:28:40.680469] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:13:55.617 [2024-10-27 11:28:40.680873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70756 ] 00:13:55.617 [2024-10-27 11:28:40.830467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:55.878 [2024-10-27 11:28:40.907406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.878 [2024-10-27 11:28:40.907490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.449 11:28:41 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.449 11:28:41 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:56.449 11:28:41 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:56.449 11:28:41 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.449 11:28:41 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.449 11:28:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.449 ************************************ 00:13:56.449 START TEST test_create_ublk 00:13:56.449 ************************************ 00:13:56.449 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:56.449 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:56.449 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.449 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.450 [2024-10-27 11:28:41.537313] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:56.450 [2024-10-27 11:28:41.538830] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.450 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:56.450 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.450 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:56.450 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.450 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.450 [2024-10-27 11:28:41.696426] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:56.450 [2024-10-27 11:28:41.696726] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:56.450 [2024-10-27 11:28:41.696739] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:56.450 [2024-10-27 11:28:41.696745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.450 [2024-10-27 11:28:41.704326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.450 [2024-10-27 11:28:41.704344] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:56.450 
[2024-10-27 11:28:41.712324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.450 [2024-10-27 11:28:41.720365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:56.711 [2024-10-27 11:28:41.751324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.711 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:56.711 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.711 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.711 11:28:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:56.711 { 00:13:56.711 "ublk_device": "/dev/ublkb0", 00:13:56.711 "id": 0, 00:13:56.711 "queue_depth": 512, 00:13:56.711 "num_queues": 4, 00:13:56.711 "bdev_name": "Malloc0" 00:13:56.711 } 00:13:56.711 ]' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
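run_fio_test has just expanded the fio command line from lvol/common.sh: a 10-second, time-based, direct-I/O write of the 0xcc pattern across the full 134217728-byte (128 MiB) device with verification enabled. The same check can be run by hand against an existing /dev/ublkb0 with the identical one-liner; the "verification read phase will never start" notice in the output below is expected, since the time-based write phase consumes the whole runtime:

    # identical parameters to the harness invocation below
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0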
00:13:56.711 11:28:41 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:56.972 fio: verification read phase will never start because write phase uses all of runtime 00:13:56.972 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:56.972 fio-3.35 00:13:56.972 Starting 1 process 00:14:07.115 00:14:07.115 fio_test: (groupid=0, jobs=1): err= 0: pid=70796: Sun Oct 27 11:28:52 2024 00:14:07.115 write: IOPS=14.8k, BW=57.8MiB/s (60.6MB/s)(578MiB/10001msec); 0 zone resets 00:14:07.115 clat (usec): min=38, max=3989, avg=66.77, stdev=96.32 00:14:07.115 lat (usec): min=38, max=3989, avg=67.21, stdev=96.33 00:14:07.115 clat percentiles (usec): 00:14:07.115 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 58], 00:14:07.115 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 64], 00:14:07.115 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 75], 00:14:07.115 | 99.00th=[ 85], 99.50th=[ 98], 99.90th=[ 1991], 99.95th=[ 2802], 00:14:07.115 | 99.99th=[ 3523] 00:14:07.115 bw ( KiB/s): min=57544, max=61248, per=99.95%, avg=59194.11, stdev=1174.15, samples=19 00:14:07.115 iops : min=14386, max=15312, avg=14798.53, stdev=293.54, samples=19 00:14:07.115 lat (usec) : 50=0.41%, 100=99.10%, 250=0.27%, 500=0.03%, 750=0.02% 00:14:07.115 lat (usec) : 1000=0.01% 00:14:07.115 lat (msec) : 2=0.06%, 4=0.10% 00:14:07.115 cpu : usr=2.17%, sys=13.33%, ctx=148071, majf=0, minf=796 00:14:07.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:07.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.115 issued rwts: total=0,148071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:07.115 00:14:07.115 Run status group 0 (all jobs): 00:14:07.115 WRITE: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=578MiB (606MB), run=10001-10001msec 00:14:07.115 00:14:07.115 Disk stats (read/write): 00:14:07.115 ublkb0: ios=0/146517, merge=0/0, ticks=0/8217, in_queue=8218, util=99.09% 00:14:07.115 11:28:52 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.115 [2024-10-27 11:28:52.152863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:07.115 [2024-10-27 11:28:52.205356] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:07.115 [2024-10-27 11:28:52.206041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:07.115 [2024-10-27 11:28:52.217384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:07.115 [2024-10-27 11:28:52.217637] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:07.115 [2024-10-27 11:28:52.217652] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.115 11:28:52 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.115 [2024-10-27 11:28:52.232370] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:07.115 request: 00:14:07.115 { 00:14:07.115 "ublk_id": 0, 00:14:07.115 "method": "ublk_stop_disk", 00:14:07.115 "req_id": 1 00:14:07.115 } 00:14:07.115 Got JSON-RPC error response 00:14:07.115 response: 00:14:07.115 { 00:14:07.115 "code": -19, 00:14:07.115 "message": "No such device" 00:14:07.115 } 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.115 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.116 11:28:52 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:07.116 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.116 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.116 [2024-10-27 11:28:52.248387] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:07.116 [2024-10-27 11:28:52.256316] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.116 [2024-10-27 11:28:52.256350] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:07.116 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.116 11:28:52 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:07.116 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.116 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.374 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.374 11:28:52 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:07.374 11:28:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:07.374 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.374 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.374 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.374 11:28:52 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:07.374 11:28:52 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:07.632 11:28:52 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:07.632 11:28:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:07.632 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.632 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.632 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.632 11:28:52 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:07.632 11:28:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:07.632 ************************************ 00:14:07.632 END TEST test_create_ublk 00:14:07.632 ************************************ 00:14:07.632 11:28:52 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:07.632 00:14:07.632 real 0m11.176s 00:14:07.632 user 0m0.506s 00:14:07.632 sys 0m1.409s 00:14:07.632 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.632 11:28:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.632 11:28:52 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:07.632 11:28:52 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:07.632 11:28:52 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.632 11:28:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.632 ************************************ 00:14:07.632 START TEST test_create_multi_ublk 00:14:07.632 ************************************ 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.632 [2024-10-27 11:28:52.751304] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:07.632 [2024-10-27 11:28:52.752890] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.632 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.891 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.891 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:07.891 11:28:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:07.891 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.891 11:28:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.891 [2024-10-27 11:28:52.975421] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:07.891 [2024-10-27 11:28:52.975722] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:07.891 [2024-10-27 11:28:52.975735] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:07.891 [2024-10-27 11:28:52.975744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:07.891 [2024-10-27 11:28:52.987332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:07.891 [2024-10-27 11:28:52.987352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:07.891 [2024-10-27 11:28:52.999319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:07.891 [2024-10-27 11:28:52.999804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:07.891 [2024-10-27 11:28:53.039326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.891 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.156 [2024-10-27 11:28:53.279411] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:08.156 [2024-10-27 11:28:53.279705] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:08.156 [2024-10-27 11:28:53.279717] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.156 [2024-10-27 11:28:53.279722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.156 [2024-10-27 11:28:53.291331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.156 [2024-10-27 11:28:53.291347] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.156 [2024-10-27 11:28:53.303321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.156 [2024-10-27 11:28:53.303807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.156 [2024-10-27 11:28:53.339325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.156 11:28:53 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.156 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.418 [2024-10-27 11:28:53.555406] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:08.418 [2024-10-27 11:28:53.555704] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:08.418 [2024-10-27 11:28:53.555716] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:08.418 [2024-10-27 11:28:53.555722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.418 [2024-10-27 11:28:53.563350] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.418 [2024-10-27 11:28:53.563368] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.418 [2024-10-27 11:28:53.571321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.418 [2024-10-27 11:28:53.571802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:08.418 [2024-10-27 11:28:53.580338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.418 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 [2024-10-27 11:28:53.739414] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:08.676 [2024-10-27 11:28:53.739705] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:08.676 [2024-10-27 11:28:53.739718] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:08.676 [2024-10-27 11:28:53.739723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.676 [2024-10-27 
11:28:53.747346] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.676 [2024-10-27 11:28:53.747362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.676 [2024-10-27 11:28:53.755329] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.676 [2024-10-27 11:28:53.755820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:08.676 [2024-10-27 11:28:53.762368] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:08.676 { 00:14:08.676 "ublk_device": "/dev/ublkb0", 00:14:08.676 "id": 0, 00:14:08.676 "queue_depth": 512, 00:14:08.676 "num_queues": 4, 00:14:08.676 "bdev_name": "Malloc0" 00:14:08.676 }, 00:14:08.676 { 00:14:08.676 "ublk_device": "/dev/ublkb1", 00:14:08.676 "id": 1, 00:14:08.676 "queue_depth": 512, 00:14:08.676 "num_queues": 4, 00:14:08.676 "bdev_name": "Malloc1" 00:14:08.676 }, 00:14:08.676 { 00:14:08.676 "ublk_device": "/dev/ublkb2", 00:14:08.676 "id": 2, 00:14:08.676 "queue_depth": 512, 00:14:08.676 "num_queues": 4, 00:14:08.676 "bdev_name": "Malloc2" 00:14:08.676 }, 00:14:08.676 { 00:14:08.676 "ublk_device": "/dev/ublkb3", 00:14:08.676 "id": 3, 00:14:08.676 "queue_depth": 512, 00:14:08.676 "num_queues": 4, 00:14:08.676 "bdev_name": "Malloc3" 00:14:08.676 } 00:14:08.676 ]' 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.676 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:08.934 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:08.934 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.934 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:08.934 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
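The ublk_get_disks output above lists the four devices test_create_multi_ublk has created, one malloc bdev per ublk id 0-3, each exposed with 4 queues of depth 512, and the jq checks now walk each entry. A compact sketch that reproduces the same layout with the RPCs seen in this trace (assuming scripts/rpc.py and a freshly started spdk_tgt) is:

    # create MAX_DEV_ID+1 = 4 malloc-backed ublk devices, mirroring ublk.sh@64-68
    ./scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096    # 128 MiB bdev, 4096-byte blocks
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    ./scripts/rpc.py ublk_get_disks    # should report /dev/ublkb0 .. /dev/ublkb3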
00:14:08.934 11:28:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:08.934 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.192 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.192 [2024-10-27 11:28:54.442389] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.451 [2024-10-27 11:28:54.482735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.451 [2024-10-27 11:28:54.484011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.451 [2024-10-27 11:28:54.490326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.451 [2024-10-27 11:28:54.490569] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:09.451 [2024-10-27 11:28:54.490582] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.451 [2024-10-27 11:28:54.505379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.451 [2024-10-27 11:28:54.538358] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.451 [2024-10-27 11:28:54.539157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.451 [2024-10-27 11:28:54.546319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.451 [2024-10-27 11:28:54.546552] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:09.451 [2024-10-27 11:28:54.546565] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.451 [2024-10-27 11:28:54.562381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.451 [2024-10-27 11:28:54.606362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.451 [2024-10-27 11:28:54.607122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.451 [2024-10-27 11:28:54.610538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.451 [2024-10-27 11:28:54.610772] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:09.451 [2024-10-27 11:28:54.610784] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
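Each ublk_stop_disk call above triggers the UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pair before the device is removed from the tailq. A sketch of the teardown loop, mirroring the commands in this trace (the ublk_destroy_target step follows immediately below in the log):

    # stop and delete each ublk device, then tear down the ublk target
    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"
    done
    scripts/rpc.py -t 120 ublk_destroy_target    # long timeout, as used by the test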
00:14:09.451 [2024-10-27 11:28:54.629385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:09.451 [2024-10-27 11:28:54.661335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:09.451 [2024-10-27 11:28:54.662036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:09.451 [2024-10-27 11:28:54.669324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:09.451 [2024-10-27 11:28:54.669572] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:09.451 [2024-10-27 11:28:54.669584] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.451 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:09.709 [2024-10-27 11:28:54.861366] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:09.709 [2024-10-27 11:28:54.869311] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.709 [2024-10-27 11:28:54.869339] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:09.709 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:09.709 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.709 11:28:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:09.709 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.709 11:28:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:09.967 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.967 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:09.967 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:09.967 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.967 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.533 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.533 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.534 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:10.791 11:28:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:10.791 ************************************ 00:14:10.791 END TEST test_create_multi_ublk 00:14:10.791 ************************************ 00:14:10.791 11:28:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:10.791 00:14:10.792 real 0m3.330s 00:14:10.792 user 0m0.820s 00:14:10.792 sys 0m0.153s 00:14:10.792 11:28:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.792 11:28:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.050 11:28:56 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:11.050 11:28:56 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:11.050 11:28:56 ublk -- ublk/ublk.sh@130 -- # killprocess 70756 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@950 -- # '[' -z 70756 ']' 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@954 -- # kill -0 70756 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@955 -- # uname 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70756 00:14:11.050 killing process with pid 70756 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70756' 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@969 -- # kill 70756 00:14:11.050 11:28:56 ublk -- common/autotest_common.sh@974 -- # wait 70756 00:14:11.615 [2024-10-27 11:28:56.653945] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:11.615 [2024-10-27 11:28:56.653990] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:12.186 00:14:12.186 real 0m24.434s 00:14:12.186 user 0m34.733s 00:14:12.186 sys 0m9.588s 00:14:12.186 11:28:57 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.186 11:28:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.186 ************************************ 00:14:12.186 END TEST ublk 00:14:12.186 ************************************ 00:14:12.186 11:28:57 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:12.187 11:28:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:14:12.187 11:28:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.187 11:28:57 -- common/autotest_common.sh@10 -- # set +x 00:14:12.187 ************************************ 00:14:12.187 START TEST ublk_recovery 00:14:12.187 ************************************ 00:14:12.187 11:28:57 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:12.187 * Looking for test storage... 00:14:12.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:12.187 11:28:57 ublk_recovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:12.187 11:28:57 ublk_recovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:12.187 11:28:57 ublk_recovery -- common/autotest_common.sh@1689 -- # lcov --version 00:14:12.447 11:28:57 ublk_recovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.447 11:28:57 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:12.447 11:28:57 ublk_recovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.447 11:28:57 ublk_recovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.447 --rc genhtml_branch_coverage=1 00:14:12.447 --rc genhtml_function_coverage=1 00:14:12.447 --rc genhtml_legend=1 00:14:12.447 --rc geninfo_all_blocks=1 00:14:12.447 --rc geninfo_unexecuted_blocks=1 00:14:12.447 00:14:12.447 ' 00:14:12.447 11:28:57 ublk_recovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:12.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.447 --rc genhtml_branch_coverage=1 00:14:12.447 --rc genhtml_function_coverage=1 00:14:12.447 --rc genhtml_legend=1 00:14:12.447 --rc geninfo_all_blocks=1 00:14:12.447 --rc geninfo_unexecuted_blocks=1 00:14:12.447 00:14:12.447 ' 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.448 --rc genhtml_branch_coverage=1 00:14:12.448 --rc genhtml_function_coverage=1 00:14:12.448 --rc genhtml_legend=1 00:14:12.448 --rc geninfo_all_blocks=1 00:14:12.448 --rc geninfo_unexecuted_blocks=1 00:14:12.448 00:14:12.448 ' 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.448 --rc genhtml_branch_coverage=1 00:14:12.448 --rc genhtml_function_coverage=1 00:14:12.448 --rc genhtml_legend=1 00:14:12.448 --rc geninfo_all_blocks=1 00:14:12.448 --rc geninfo_unexecuted_blocks=1 00:14:12.448 00:14:12.448 ' 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:12.448 11:28:57 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:12.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71144 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71144 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71144 ']' 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.448 11:28:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 11:28:57 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:12.448 [2024-10-27 11:28:57.583515] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:14:12.448 [2024-10-27 11:28:57.583645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71144 ] 00:14:12.707 [2024-10-27 11:28:57.738900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:12.707 [2024-10-27 11:28:57.815250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.707 [2024-10-27 11:28:57.815344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:13.273 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.273 [2024-10-27 11:28:58.365314] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:13.273 [2024-10-27 11:28:58.366803] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.273 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.273 malloc0 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.273 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.273 [2024-10-27 11:28:58.445616] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:13.273 [2024-10-27 11:28:58.445698] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:13.273 [2024-10-27 11:28:58.445706] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:13.273 [2024-10-27 11:28:58.445714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:13.273 [2024-10-27 11:28:58.454405] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:13.273 [2024-10-27 11:28:58.454420] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:13.273 [2024-10-27 11:28:58.461339] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:13.273 [2024-10-27 11:28:58.461453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:13.273 [2024-10-27 11:28:58.476324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:13.273 1 00:14:13.273 11:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.273 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:14.647 11:28:59 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71179 00:14:14.647 11:28:59 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:14.647 11:28:59 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:14.647 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.647 fio-3.35 00:14:14.647 Starting 1 process 00:14:19.914 11:29:04 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71144 00:14:19.914 11:29:04 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:25.205 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71144 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:25.205 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71293 00:14:25.205 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.205 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71293 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71293 ']' 00:14:25.205 11:29:09 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.205 11:29:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.205 [2024-10-27 11:29:09.578673] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
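The recovery scenario above reduces to: expose malloc0 through ublk, start fio against /dev/ublkb1, kill the SPDK target with SIGKILL while I/O is in flight, then bring up a fresh target that recovers the device. A condensed sketch assembled from the commands visible in the trace (PIDs and the backgrounding are illustrative; the ublk_recover_disk call issued by the new target appears next in the log):

    modprobe ublk_drv
    build/bin/spdk_tgt -m 0x3 -L ublk &             # first target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128    # -> /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                             # simulate a crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk &             # second target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1      # re-attach the existing /dev/ublkb1 to the new target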
00:14:25.205 [2024-10-27 11:29:09.578784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:14:25.205 [2024-10-27 11:29:09.734932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:25.205 [2024-10-27 11:29:09.836887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.205 [2024-10-27 11:29:09.837001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:25.205 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.205 [2024-10-27 11:29:10.441327] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:25.205 [2024-10-27 11:29:10.443174] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.205 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.205 11:29:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.464 malloc0 00:14:25.464 11:29:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.464 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:25.464 11:29:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.464 11:29:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:25.464 [2024-10-27 11:29:10.545462] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:25.464 [2024-10-27 11:29:10.545497] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:25.464 [2024-10-27 11:29:10.545506] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:25.464 [2024-10-27 11:29:10.553358] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:25.464 [2024-10-27 11:29:10.553375] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:25.464 [2024-10-27 11:29:10.553384] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:25.464 [2024-10-27 11:29:10.553463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:25.464 1 00:14:25.464 11:29:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.464 11:29:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71179 00:14:25.464 [2024-10-27 11:29:10.561337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:25.464 [2024-10-27 11:29:10.567718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:25.464 [2024-10-27 11:29:10.575326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:25.464 [2024-10-27 
11:29:10.575349] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:21.687 00:15:21.687 fio_test: (groupid=0, jobs=1): err= 0: pid=71182: Sun Oct 27 11:29:59 2024 00:15:21.687 read: IOPS=26.7k, BW=104MiB/s (109MB/s)(6264MiB/60002msec) 00:15:21.687 slat (nsec): min=959, max=1246.8k, avg=4918.14, stdev=1818.23 00:15:21.687 clat (usec): min=663, max=6095.2k, avg=2343.92, stdev=37878.25 00:15:21.687 lat (usec): min=798, max=6095.2k, avg=2348.84, stdev=37878.24 00:15:21.687 clat percentiles (usec): 00:15:21.687 | 1.00th=[ 1778], 5.00th=[ 1909], 10.00th=[ 1926], 20.00th=[ 1958], 00:15:21.687 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2008], 00:15:21.687 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2089], 95.00th=[ 2900], 00:15:21.687 | 99.00th=[ 4883], 99.50th=[ 5473], 99.90th=[ 6521], 99.95th=[ 7242], 00:15:21.687 | 99.99th=[13042] 00:15:21.687 bw ( KiB/s): min=10712, max=123096, per=100.00%, avg=117765.93, stdev=14074.78, samples=108 00:15:21.687 iops : min= 2678, max=30774, avg=29441.48, stdev=3518.69, samples=108 00:15:21.687 write: IOPS=26.7k, BW=104MiB/s (109MB/s)(6259MiB/60002msec); 0 zone resets 00:15:21.687 slat (nsec): min=993, max=386572, avg=4946.62, stdev=1476.39 00:15:21.687 clat (usec): min=634, max=6095.4k, avg=2436.42, stdev=39099.52 00:15:21.687 lat (usec): min=638, max=6095.4k, avg=2441.37, stdev=39099.52 00:15:21.687 clat percentiles (usec): 00:15:21.687 | 1.00th=[ 1827], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2040], 00:15:21.687 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:15:21.687 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2180], 95.00th=[ 2802], 00:15:21.687 | 99.00th=[ 4817], 99.50th=[ 5604], 99.90th=[ 6587], 99.95th=[ 7504], 00:15:21.687 | 99.99th=[13042] 00:15:21.687 bw ( KiB/s): min=10824, max=122760, per=100.00%, avg=117642.15, stdev=14088.12, samples=108 00:15:21.687 iops : min= 2706, max=30690, avg=29410.54, stdev=3522.03, samples=108 00:15:21.687 lat (usec) : 750=0.01%, 1000=0.01% 00:15:21.687 lat (msec) : 2=28.67%, 4=69.01%, 10=2.31%, 20=0.01%, >=2000=0.01% 00:15:21.687 cpu : usr=5.86%, sys=27.25%, ctx=106937, majf=0, minf=13 00:15:21.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:21.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:21.687 issued rwts: total=1603695,1602298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:21.687 00:15:21.687 Run status group 0 (all jobs): 00:15:21.687 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=6264MiB (6569MB), run=60002-60002msec 00:15:21.687 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=6259MiB (6563MB), run=60002-60002msec 00:15:21.687 00:15:21.687 Disk stats (read/write): 00:15:21.687 ublkb1: ios=1600385/1599038, merge=0/0, ticks=3668730/3683226, in_queue=7351956, util=99.89% 00:15:21.687 11:29:59 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.687 [2024-10-27 11:29:59.738810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:21.687 [2024-10-27 11:29:59.778331] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:15:21.687 [2024-10-27 11:29:59.778497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:21.687 [2024-10-27 11:29:59.780428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:21.687 [2024-10-27 11:29:59.780513] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:21.687 [2024-10-27 11:29:59.780521] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.687 11:29:59 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.687 [2024-10-27 11:29:59.791384] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.687 [2024-10-27 11:29:59.799311] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.687 [2024-10-27 11:29:59.799344] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.687 11:29:59 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:21.687 11:29:59 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:21.687 11:29:59 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71293 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71293 ']' 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71293 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71293 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.687 killing process with pid 71293 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71293' 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71293 00:15:21.687 11:29:59 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71293 00:15:21.687 [2024-10-27 11:30:00.872852] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:21.687 [2024-10-27 11:30:00.872895] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:21.687 ************************************ 00:15:21.687 END TEST ublk_recovery 00:15:21.687 ************************************ 00:15:21.687 00:15:21.687 real 1m4.235s 00:15:21.687 user 1m45.452s 00:15:21.687 sys 0m32.029s 00:15:21.687 11:30:01 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.687 11:30:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:21.687 11:30:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:21.687 11:30:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.687 11:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:21.687 11:30:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:21.687 11:30:01 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.688 11:30:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:21.688 11:30:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.688 11:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:21.688 ************************************ 00:15:21.688 START TEST ftl 00:15:21.688 ************************************ 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.688 * Looking for test storage... 00:15:21.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1689 -- # lcov --version 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.688 11:30:01 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.688 11:30:01 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.688 11:30:01 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.688 11:30:01 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.688 11:30:01 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.688 11:30:01 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:21.688 11:30:01 ftl -- scripts/common.sh@345 -- # : 1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.688 11:30:01 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.688 11:30:01 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@353 -- # local d=1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.688 11:30:01 ftl -- scripts/common.sh@355 -- # echo 1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.688 11:30:01 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@353 -- # local d=2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.688 11:30:01 ftl -- scripts/common.sh@355 -- # echo 2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.688 11:30:01 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.688 11:30:01 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.688 11:30:01 ftl -- scripts/common.sh@368 -- # return 0 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.688 --rc genhtml_branch_coverage=1 00:15:21.688 --rc genhtml_function_coverage=1 00:15:21.688 --rc genhtml_legend=1 00:15:21.688 --rc geninfo_all_blocks=1 00:15:21.688 --rc geninfo_unexecuted_blocks=1 00:15:21.688 00:15:21.688 ' 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.688 --rc genhtml_branch_coverage=1 00:15:21.688 --rc genhtml_function_coverage=1 00:15:21.688 --rc genhtml_legend=1 00:15:21.688 --rc geninfo_all_blocks=1 00:15:21.688 --rc geninfo_unexecuted_blocks=1 00:15:21.688 00:15:21.688 ' 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.688 --rc genhtml_branch_coverage=1 00:15:21.688 --rc genhtml_function_coverage=1 00:15:21.688 --rc genhtml_legend=1 00:15:21.688 --rc geninfo_all_blocks=1 00:15:21.688 --rc geninfo_unexecuted_blocks=1 00:15:21.688 00:15:21.688 ' 00:15:21.688 11:30:01 ftl -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:21.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.688 --rc genhtml_branch_coverage=1 00:15:21.688 --rc genhtml_function_coverage=1 00:15:21.688 --rc genhtml_legend=1 00:15:21.688 --rc geninfo_all_blocks=1 00:15:21.688 --rc geninfo_unexecuted_blocks=1 00:15:21.688 00:15:21.688 ' 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:21.688 11:30:01 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:21.688 11:30:01 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.688 11:30:01 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.688 11:30:01 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:21.688 11:30:01 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:21.688 11:30:01 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.688 11:30:01 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.688 11:30:01 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.688 11:30:01 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.688 11:30:01 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.688 11:30:01 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:21.688 11:30:01 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:21.688 11:30:01 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.688 11:30:01 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.688 11:30:01 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:21.688 11:30:01 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.688 11:30:01 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.688 11:30:01 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.688 11:30:01 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.688 11:30:01 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:21.688 11:30:01 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:21.688 11:30:01 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.688 11:30:01 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:21.688 11:30:01 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:21.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:21.688 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.688 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.688 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.688 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:21.688 11:30:02 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72103 00:15:21.688 11:30:02 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72103 00:15:21.688 11:30:02 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@831 -- # '[' -z 72103 ']' 00:15:21.688 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.688 11:30:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:21.688 [2024-10-27 11:30:02.474983] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:15:21.689 [2024-10-27 11:30:02.475140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72103 ] 00:15:21.689 [2024-10-27 11:30:02.638524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.689 [2024-10-27 11:30:02.761768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.689 11:30:03 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.689 11:30:03 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:21.689 11:30:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:21.689 11:30:03 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:21.689 11:30:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:21.689 11:30:04 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:21.689 11:30:04 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:21.689 11:30:04 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.689 11:30:04 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@50 -- # break 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@63 -- # break 00:15:21.689 11:30:05 ftl -- ftl/ftl.sh@66 -- # killprocess 72103 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@950 -- # '[' -z 72103 ']' 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@954 -- # kill -0 72103 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@955 -- # uname 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.689 11:30:05 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72103 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.689 killing process with pid 72103 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72103' 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@969 -- # kill 72103 00:15:21.689 11:30:05 ftl -- common/autotest_common.sh@974 -- # wait 72103 00:15:21.689 11:30:06 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:21.689 11:30:06 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:21.689 11:30:06 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:21.689 11:30:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.689 11:30:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:21.689 ************************************ 00:15:21.689 START TEST ftl_fio_basic 00:15:21.689 ************************************ 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:21.689 * Looking for test storage... 00:15:21.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lcov --version 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.689 --rc genhtml_branch_coverage=1 00:15:21.689 --rc genhtml_function_coverage=1 00:15:21.689 --rc genhtml_legend=1 00:15:21.689 --rc geninfo_all_blocks=1 00:15:21.689 --rc geninfo_unexecuted_blocks=1 00:15:21.689 00:15:21.689 ' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.689 --rc genhtml_branch_coverage=1 00:15:21.689 --rc genhtml_function_coverage=1 00:15:21.689 --rc genhtml_legend=1 00:15:21.689 --rc geninfo_all_blocks=1 00:15:21.689 --rc geninfo_unexecuted_blocks=1 00:15:21.689 00:15:21.689 ' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.689 --rc genhtml_branch_coverage=1 00:15:21.689 --rc genhtml_function_coverage=1 00:15:21.689 --rc genhtml_legend=1 00:15:21.689 --rc geninfo_all_blocks=1 00:15:21.689 --rc geninfo_unexecuted_blocks=1 00:15:21.689 00:15:21.689 ' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:21.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.689 --rc genhtml_branch_coverage=1 00:15:21.689 --rc genhtml_function_coverage=1 00:15:21.689 --rc genhtml_legend=1 00:15:21.689 --rc geninfo_all_blocks=1 00:15:21.689 --rc geninfo_unexecuted_blocks=1 00:15:21.689 00:15:21.689 ' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
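Before the ftl_fio_basic run that starts here, ftl.sh (above) picked the cache and base devices by filtering bdev_get_bdevs output with jq. A sketch of that selection, using the exact filters from the trace; the resulting PCI addresses are the ones this run reports:

    # cache device: 64-byte metadata, non-zoned, at least 1310720 blocks
    cache_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    # base device: any other non-zoned NVMe bdev of sufficient size
    base_disks=$(scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    # in this run: cache -> 0000:00:10.0, base -> 0000:00:11.0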
00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:21.689 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72235 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72235 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72235 ']' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.690 11:30:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:21.690 [2024-10-27 11:30:06.769614] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
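Functionally, fio.sh@44-46 in the trace above starts spdk_tgt on a three-core mask and blocks until the target answers on its default RPC socket. A rough stand-alone equivalent of that launch-and-wait step; the spdk_get_version probe and the fixed retry budget are illustrative choices, not what waitforlisten literally does:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_tgt" -m 7 &           # -m 7: reactors on cores 0, 1 and 2
    svcpid=$!

    # Poll the RPC socket (up to ~50 s) until the target is up, then continue.
    for _ in $(seq 1 100); do
        "$rpc_py" -s "$rpc_sock" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done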
00:15:21.690 [2024-10-27 11:30:06.769741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72235 ] 00:15:21.690 [2024-10-27 11:30:06.927372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.952 [2024-10-27 11:30:07.013683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.952 [2024-10-27 11:30:07.014002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.952 [2024-10-27 11:30:07.014007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:22.525 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:22.786 11:30:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:22.786 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:22.786 { 00:15:22.786 "name": "nvme0n1", 00:15:22.786 "aliases": [ 00:15:22.786 "9ce67769-53cb-4fc9-9ee7-3ea13aabbd5b" 00:15:22.786 ], 00:15:22.786 "product_name": "NVMe disk", 00:15:22.786 "block_size": 4096, 00:15:22.786 "num_blocks": 1310720, 00:15:22.786 "uuid": "9ce67769-53cb-4fc9-9ee7-3ea13aabbd5b", 00:15:22.786 "numa_id": -1, 00:15:22.786 "assigned_rate_limits": { 00:15:22.786 "rw_ios_per_sec": 0, 00:15:22.786 "rw_mbytes_per_sec": 0, 00:15:22.786 "r_mbytes_per_sec": 0, 00:15:22.786 "w_mbytes_per_sec": 0 00:15:22.786 }, 00:15:22.786 "claimed": false, 00:15:22.786 "zoned": false, 00:15:22.786 "supported_io_types": { 00:15:22.786 "read": true, 00:15:22.786 "write": true, 00:15:22.786 "unmap": true, 00:15:22.786 "flush": true, 00:15:22.786 "reset": true, 00:15:22.786 "nvme_admin": true, 00:15:22.786 "nvme_io": true, 00:15:22.786 "nvme_io_md": false, 00:15:22.786 "write_zeroes": true, 00:15:22.786 "zcopy": false, 00:15:22.786 "get_zone_info": false, 00:15:22.786 "zone_management": false, 00:15:22.786 "zone_append": false, 00:15:22.786 "compare": true, 00:15:22.786 "compare_and_write": false, 00:15:22.786 "abort": true, 00:15:22.786 
"seek_hole": false, 00:15:22.786 "seek_data": false, 00:15:22.786 "copy": true, 00:15:22.786 "nvme_iov_md": false 00:15:22.786 }, 00:15:22.786 "driver_specific": { 00:15:22.786 "nvme": [ 00:15:22.786 { 00:15:22.786 "pci_address": "0000:00:11.0", 00:15:22.786 "trid": { 00:15:22.786 "trtype": "PCIe", 00:15:22.786 "traddr": "0000:00:11.0" 00:15:22.786 }, 00:15:22.786 "ctrlr_data": { 00:15:22.786 "cntlid": 0, 00:15:22.786 "vendor_id": "0x1b36", 00:15:22.786 "model_number": "QEMU NVMe Ctrl", 00:15:22.786 "serial_number": "12341", 00:15:22.786 "firmware_revision": "8.0.0", 00:15:22.786 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:22.786 "oacs": { 00:15:22.786 "security": 0, 00:15:22.786 "format": 1, 00:15:22.786 "firmware": 0, 00:15:22.786 "ns_manage": 1 00:15:22.786 }, 00:15:22.786 "multi_ctrlr": false, 00:15:22.786 "ana_reporting": false 00:15:22.786 }, 00:15:22.786 "vs": { 00:15:22.786 "nvme_version": "1.4" 00:15:22.786 }, 00:15:22.786 "ns_data": { 00:15:22.786 "id": 1, 00:15:22.786 "can_share": false 00:15:22.786 } 00:15:22.786 } 00:15:22.786 ], 00:15:22.786 "mp_policy": "active_passive" 00:15:22.786 } 00:15:22.786 } 00:15:22.786 ]' 00:15:22.786 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:23.048 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:23.310 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=151100c9-4d6d-4c23-ba19-ade17fa3e59a 00:15:23.310 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 151100c9-4d6d-4c23-ba19-ade17fa3e59a 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=d5b7981e-ffea-435e-a39d-2758f53c9eae 
00:15:23.572 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:23.572 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:23.831 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:23.831 { 00:15:23.831 "name": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:23.831 "aliases": [ 00:15:23.831 "lvs/nvme0n1p0" 00:15:23.831 ], 00:15:23.831 "product_name": "Logical Volume", 00:15:23.831 "block_size": 4096, 00:15:23.831 "num_blocks": 26476544, 00:15:23.831 "uuid": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:23.831 "assigned_rate_limits": { 00:15:23.831 "rw_ios_per_sec": 0, 00:15:23.831 "rw_mbytes_per_sec": 0, 00:15:23.831 "r_mbytes_per_sec": 0, 00:15:23.831 "w_mbytes_per_sec": 0 00:15:23.831 }, 00:15:23.831 "claimed": false, 00:15:23.831 "zoned": false, 00:15:23.831 "supported_io_types": { 00:15:23.831 "read": true, 00:15:23.831 "write": true, 00:15:23.831 "unmap": true, 00:15:23.831 "flush": false, 00:15:23.831 "reset": true, 00:15:23.831 "nvme_admin": false, 00:15:23.831 "nvme_io": false, 00:15:23.831 "nvme_io_md": false, 00:15:23.831 "write_zeroes": true, 00:15:23.831 "zcopy": false, 00:15:23.831 "get_zone_info": false, 00:15:23.831 "zone_management": false, 00:15:23.831 "zone_append": false, 00:15:23.831 "compare": false, 00:15:23.831 "compare_and_write": false, 00:15:23.831 "abort": false, 00:15:23.832 "seek_hole": true, 00:15:23.832 "seek_data": true, 00:15:23.832 "copy": false, 00:15:23.832 "nvme_iov_md": false 00:15:23.832 }, 00:15:23.832 "driver_specific": { 00:15:23.832 "lvol": { 00:15:23.832 "lvol_store_uuid": "151100c9-4d6d-4c23-ba19-ade17fa3e59a", 00:15:23.832 "base_bdev": "nvme0n1", 00:15:23.832 "thin_provision": true, 00:15:23.832 "num_allocated_clusters": 0, 00:15:23.832 "snapshot": false, 00:15:23.832 "clone": false, 00:15:23.832 "esnap_clone": false 00:15:23.832 } 00:15:23.832 } 00:15:23.832 } 00:15:23.832 ]' 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:23.832 11:30:08 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.090 11:30:09 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:24.090 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.349 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.349 { 00:15:24.349 "name": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:24.349 "aliases": [ 00:15:24.349 "lvs/nvme0n1p0" 00:15:24.349 ], 00:15:24.349 "product_name": "Logical Volume", 00:15:24.349 "block_size": 4096, 00:15:24.349 "num_blocks": 26476544, 00:15:24.349 "uuid": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:24.349 "assigned_rate_limits": { 00:15:24.349 "rw_ios_per_sec": 0, 00:15:24.349 "rw_mbytes_per_sec": 0, 00:15:24.349 "r_mbytes_per_sec": 0, 00:15:24.349 "w_mbytes_per_sec": 0 00:15:24.349 }, 00:15:24.349 "claimed": false, 00:15:24.349 "zoned": false, 00:15:24.349 "supported_io_types": { 00:15:24.349 "read": true, 00:15:24.349 "write": true, 00:15:24.349 "unmap": true, 00:15:24.349 "flush": false, 00:15:24.349 "reset": true, 00:15:24.349 "nvme_admin": false, 00:15:24.349 "nvme_io": false, 00:15:24.349 "nvme_io_md": false, 00:15:24.349 "write_zeroes": true, 00:15:24.349 "zcopy": false, 00:15:24.349 "get_zone_info": false, 00:15:24.349 "zone_management": false, 00:15:24.349 "zone_append": false, 00:15:24.349 "compare": false, 00:15:24.349 "compare_and_write": false, 00:15:24.349 "abort": false, 00:15:24.349 "seek_hole": true, 00:15:24.349 "seek_data": true, 00:15:24.349 "copy": false, 00:15:24.349 "nvme_iov_md": false 00:15:24.349 }, 00:15:24.349 "driver_specific": { 00:15:24.349 "lvol": { 00:15:24.349 "lvol_store_uuid": "151100c9-4d6d-4c23-ba19-ade17fa3e59a", 00:15:24.349 "base_bdev": "nvme0n1", 00:15:24.349 "thin_provision": true, 00:15:24.349 "num_allocated_clusters": 0, 00:15:24.349 "snapshot": false, 00:15:24.349 "clone": false, 00:15:24.349 "esnap_clone": false 00:15:24.349 } 00:15:24.349 } 00:15:24.349 } 00:15:24.349 ]' 00:15:24.349 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.349 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.349 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.349 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:24.350 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:24.350 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:24.350 11:30:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:24.350 11:30:09 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:24.608 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:24.608 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d5b7981e-ffea-435e-a39d-2758f53c9eae 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.866 { 00:15:24.866 "name": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:24.866 "aliases": [ 00:15:24.866 "lvs/nvme0n1p0" 00:15:24.866 ], 00:15:24.866 "product_name": "Logical Volume", 00:15:24.866 "block_size": 4096, 00:15:24.866 "num_blocks": 26476544, 00:15:24.866 "uuid": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:24.866 "assigned_rate_limits": { 00:15:24.866 "rw_ios_per_sec": 0, 00:15:24.866 "rw_mbytes_per_sec": 0, 00:15:24.866 "r_mbytes_per_sec": 0, 00:15:24.866 "w_mbytes_per_sec": 0 00:15:24.866 }, 00:15:24.866 "claimed": false, 00:15:24.866 "zoned": false, 00:15:24.866 "supported_io_types": { 00:15:24.866 "read": true, 00:15:24.866 "write": true, 00:15:24.866 "unmap": true, 00:15:24.866 "flush": false, 00:15:24.866 "reset": true, 00:15:24.866 "nvme_admin": false, 00:15:24.866 "nvme_io": false, 00:15:24.866 "nvme_io_md": false, 00:15:24.866 "write_zeroes": true, 00:15:24.866 "zcopy": false, 00:15:24.866 "get_zone_info": false, 00:15:24.866 "zone_management": false, 00:15:24.866 "zone_append": false, 00:15:24.866 "compare": false, 00:15:24.866 "compare_and_write": false, 00:15:24.866 "abort": false, 00:15:24.866 "seek_hole": true, 00:15:24.866 "seek_data": true, 00:15:24.866 "copy": false, 00:15:24.866 "nvme_iov_md": false 00:15:24.866 }, 00:15:24.866 "driver_specific": { 00:15:24.866 "lvol": { 00:15:24.866 "lvol_store_uuid": "151100c9-4d6d-4c23-ba19-ade17fa3e59a", 00:15:24.866 "base_bdev": "nvme0n1", 00:15:24.866 "thin_provision": true, 00:15:24.866 "num_allocated_clusters": 0, 00:15:24.866 "snapshot": false, 00:15:24.866 "clone": false, 00:15:24.866 "esnap_clone": false 00:15:24.866 } 00:15:24.866 } 00:15:24.866 } 00:15:24.866 ]' 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:24.866 11:30:09 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d5b7981e-ffea-435e-a39d-2758f53c9eae -c nvc0n1p0 --l2p_dram_limit 60 00:15:25.126 [2024-10-27 11:30:10.183886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.184097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:25.126 [2024-10-27 11:30:10.184119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:25.126 
[2024-10-27 11:30:10.184126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.184188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.184196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:25.126 [2024-10-27 11:30:10.184203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:15:25.126 [2024-10-27 11:30:10.184211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.184247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:25.126 [2024-10-27 11:30:10.184826] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:25.126 [2024-10-27 11:30:10.184844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.184850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:25.126 [2024-10-27 11:30:10.184858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:15:25.126 [2024-10-27 11:30:10.184864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.184927] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5cb2f974-6cc6-4c6a-8575-80770559fc01 00:15:25.126 [2024-10-27 11:30:10.185909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.185938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:25.126 [2024-10-27 11:30:10.185947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:25.126 [2024-10-27 11:30:10.185954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.190662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.190774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:25.126 [2024-10-27 11:30:10.190787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.643 ms 00:15:25.126 [2024-10-27 11:30:10.190794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.190880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.190890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:25.126 [2024-10-27 11:30:10.190897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:25.126 [2024-10-27 11:30:10.190906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.190962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.190970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:25.126 [2024-10-27 11:30:10.190976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:25.126 [2024-10-27 11:30:10.190983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.191012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:25.126 [2024-10-27 11:30:10.193922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 
11:30:10.194025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:25.126 [2024-10-27 11:30:10.194040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:15:25.126 [2024-10-27 11:30:10.194046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.194091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.194100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:25.126 [2024-10-27 11:30:10.194108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:25.126 [2024-10-27 11:30:10.194113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.194142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:25.126 [2024-10-27 11:30:10.194255] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:25.126 [2024-10-27 11:30:10.194268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:25.126 [2024-10-27 11:30:10.194277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:25.126 [2024-10-27 11:30:10.194286] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194309] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194317] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:25.126 [2024-10-27 11:30:10.194323] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:25.126 [2024-10-27 11:30:10.194330] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:25.126 [2024-10-27 11:30:10.194336] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:25.126 [2024-10-27 11:30:10.194343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.194348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:25.126 [2024-10-27 11:30:10.194359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:15:25.126 [2024-10-27 11:30:10.194364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.194442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.126 [2024-10-27 11:30:10.194448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:25.126 [2024-10-27 11:30:10.194455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:25.126 [2024-10-27 11:30:10.194460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.126 [2024-10-27 11:30:10.194556] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:25.126 [2024-10-27 11:30:10.194566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:25.126 [2024-10-27 11:30:10.194573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194587] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:25.126 [2024-10-27 11:30:10.194592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:25.126 [2024-10-27 11:30:10.194611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.126 [2024-10-27 11:30:10.194623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:25.126 [2024-10-27 11:30:10.194628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:25.126 [2024-10-27 11:30:10.194634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:25.126 [2024-10-27 11:30:10.194639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:25.126 [2024-10-27 11:30:10.194646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:25.126 [2024-10-27 11:30:10.194654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:25.126 [2024-10-27 11:30:10.194668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:25.126 [2024-10-27 11:30:10.194686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:25.126 [2024-10-27 11:30:10.194702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:25.126 [2024-10-27 11:30:10.194720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:25.126 [2024-10-27 11:30:10.194736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:25.126 [2024-10-27 11:30:10.194755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:25.126 [2024-10-27 11:30:10.194766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:25.126 [2024-10-27 11:30:10.194782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:25.126 [2024-10-27 11:30:10.194788] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:25.126 [2024-10-27 11:30:10.194793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:25.126 [2024-10-27 11:30:10.194799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:25.126 [2024-10-27 11:30:10.194804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:25.126 [2024-10-27 11:30:10.194815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:25.126 [2024-10-27 11:30:10.194822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:25.126 [2024-10-27 11:30:10.194835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:25.126 [2024-10-27 11:30:10.194840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:25.126 [2024-10-27 11:30:10.194846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:25.126 [2024-10-27 11:30:10.194854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:25.126 [2024-10-27 11:30:10.194862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:25.127 [2024-10-27 11:30:10.194867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:25.127 [2024-10-27 11:30:10.194873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:25.127 [2024-10-27 11:30:10.194879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:25.127 [2024-10-27 11:30:10.194885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:25.127 [2024-10-27 11:30:10.194893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:25.127 [2024-10-27 11:30:10.194901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.194908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:25.127 [2024-10-27 11:30:10.194915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:25.127 [2024-10-27 11:30:10.194920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:25.127 [2024-10-27 11:30:10.194927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:25.127 [2024-10-27 11:30:10.194932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:25.127 [2024-10-27 11:30:10.194939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:25.127 [2024-10-27 11:30:10.194944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:25.127 [2024-10-27 11:30:10.194951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:25.127 [2024-10-27 11:30:10.194956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:25.127 [2024-10-27 11:30:10.194964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.194969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.194976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.194981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.194988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:25.127 [2024-10-27 11:30:10.194994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:25.127 [2024-10-27 11:30:10.195001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.195007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:25.127 [2024-10-27 11:30:10.195014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:25.127 [2024-10-27 11:30:10.195019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:25.127 [2024-10-27 11:30:10.195027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:25.127 [2024-10-27 11:30:10.195033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.127 [2024-10-27 11:30:10.195040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:25.127 [2024-10-27 11:30:10.195047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:15:25.127 [2024-10-27 11:30:10.195053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.127 [2024-10-27 11:30:10.195125] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
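Boiled down, everything the FTL startup log above reports was set in motion by a short chain of rpc.py calls traced earlier: attach the base controller, carve a thin lvol out of it, attach and split the cache controller, then create the FTL bdev on top. Condensed, with the names and sizes from this run (103424 MiB base volume, 5171 MiB cache split, 60 MiB DRAM-resident L2P):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: QEMU NVMe at 0000:00:11.0, lvstore + 103424 MiB thin lvol on it.
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$("$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs)
    "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"

    # NV cache: second controller at 0000:00:10.0, split so nvc0n1p0 is 5171 MiB.
    "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    "$rpc_py" bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev over base lvol + cache split; -t 240 raises the RPC client timeout
    # because first-time startup scrubs the NV cache data region (noted just above).
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 \
        -d d5b7981e-ffea-435e-a39d-2758f53c9eae -c nvc0n1p0 --l2p_dram_limit 60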
00:15:25.127 [2024-10-27 11:30:10.195136] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:27.651 [2024-10-27 11:30:12.431763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.651 [2024-10-27 11:30:12.431931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:27.651 [2024-10-27 11:30:12.431952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2236.627 ms 00:15:27.651 [2024-10-27 11:30:12.431966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.651 [2024-10-27 11:30:12.457150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.651 [2024-10-27 11:30:12.457193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:27.651 [2024-10-27 11:30:12.457204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.972 ms 00:15:27.651 [2024-10-27 11:30:12.457214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.651 [2024-10-27 11:30:12.457355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.651 [2024-10-27 11:30:12.457368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:27.651 [2024-10-27 11:30:12.457377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:15:27.651 [2024-10-27 11:30:12.457388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.651 [2024-10-27 11:30:12.499254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.651 [2024-10-27 11:30:12.499314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:27.651 [2024-10-27 11:30:12.499329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.801 ms 00:15:27.651 [2024-10-27 11:30:12.499345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.651 [2024-10-27 11:30:12.499396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.499408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:27.652 [2024-10-27 11:30:12.499418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:27.652 [2024-10-27 11:30:12.499429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.499819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.499844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:27.652 [2024-10-27 11:30:12.499855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:15:27.652 [2024-10-27 11:30:12.499865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.500008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.500020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:27.652 [2024-10-27 11:30:12.500029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:15:27.652 [2024-10-27 11:30:12.500041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.515271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.515434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:27.652 [2024-10-27 
11:30:12.515450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.198 ms 00:15:27.652 [2024-10-27 11:30:12.515459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.526677] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:27.652 [2024-10-27 11:30:12.541175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.541218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:27.652 [2024-10-27 11:30:12.541229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.614 ms 00:15:27.652 [2024-10-27 11:30:12.541237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.589285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.589330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:27.652 [2024-10-27 11:30:12.589344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.014 ms 00:15:27.652 [2024-10-27 11:30:12.589353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.589546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.589556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:27.652 [2024-10-27 11:30:12.589568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:27.652 [2024-10-27 11:30:12.589575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.612882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.613002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:27.652 [2024-10-27 11:30:12.613023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.250 ms 00:15:27.652 [2024-10-27 11:30:12.613033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.635877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.635987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:27.652 [2024-10-27 11:30:12.636006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.802 ms 00:15:27.652 [2024-10-27 11:30:12.636013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.636615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.636634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:27.652 [2024-10-27 11:30:12.636645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:15:27.652 [2024-10-27 11:30:12.636652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.702916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.703033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:27.652 [2024-10-27 11:30:12.703055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.210 ms 00:15:27.652 [2024-10-27 11:30:12.703063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 
11:30:12.727822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.727854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:27.652 [2024-10-27 11:30:12.727866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.664 ms 00:15:27.652 [2024-10-27 11:30:12.727874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.751063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.751092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:27.652 [2024-10-27 11:30:12.751104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.140 ms 00:15:27.652 [2024-10-27 11:30:12.751111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.775213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.775243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:27.652 [2024-10-27 11:30:12.775256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.050 ms 00:15:27.652 [2024-10-27 11:30:12.775262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.775329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.775339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:27.652 [2024-10-27 11:30:12.775351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:27.652 [2024-10-27 11:30:12.775358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.775450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.652 [2024-10-27 11:30:12.775460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:27.652 [2024-10-27 11:30:12.775469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:27.652 [2024-10-27 11:30:12.775476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.652 [2024-10-27 11:30:12.776370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2592.027 ms, result 0 00:15:27.652 { 00:15:27.652 "name": "ftl0", 00:15:27.652 "uuid": "5cb2f974-6cc6-4c6a-8575-80770559fc01" 00:15:27.652 } 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.652 11:30:12 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.910 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:27.910 [ 00:15:27.911 { 00:15:27.911 "name": "ftl0", 00:15:27.911 "aliases": [ 00:15:27.911 "5cb2f974-6cc6-4c6a-8575-80770559fc01" 00:15:27.911 ], 00:15:27.911 "product_name": "FTL 
disk", 00:15:27.911 "block_size": 4096, 00:15:27.911 "num_blocks": 20971520, 00:15:27.911 "uuid": "5cb2f974-6cc6-4c6a-8575-80770559fc01", 00:15:27.911 "assigned_rate_limits": { 00:15:27.911 "rw_ios_per_sec": 0, 00:15:27.911 "rw_mbytes_per_sec": 0, 00:15:27.911 "r_mbytes_per_sec": 0, 00:15:27.911 "w_mbytes_per_sec": 0 00:15:27.911 }, 00:15:27.911 "claimed": false, 00:15:27.911 "zoned": false, 00:15:27.911 "supported_io_types": { 00:15:27.911 "read": true, 00:15:27.911 "write": true, 00:15:27.911 "unmap": true, 00:15:27.911 "flush": true, 00:15:27.911 "reset": false, 00:15:27.911 "nvme_admin": false, 00:15:27.911 "nvme_io": false, 00:15:27.911 "nvme_io_md": false, 00:15:27.911 "write_zeroes": true, 00:15:27.911 "zcopy": false, 00:15:27.911 "get_zone_info": false, 00:15:27.911 "zone_management": false, 00:15:27.911 "zone_append": false, 00:15:27.911 "compare": false, 00:15:27.911 "compare_and_write": false, 00:15:27.911 "abort": false, 00:15:27.911 "seek_hole": false, 00:15:27.911 "seek_data": false, 00:15:27.911 "copy": false, 00:15:27.911 "nvme_iov_md": false 00:15:27.911 }, 00:15:27.911 "driver_specific": { 00:15:27.911 "ftl": { 00:15:27.911 "base_bdev": "d5b7981e-ffea-435e-a39d-2758f53c9eae", 00:15:27.911 "cache": "nvc0n1p0" 00:15:27.911 } 00:15:27.911 } 00:15:27.911 } 00:15:27.911 ] 00:15:28.170 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:28.170 11:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:28.170 11:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:28.170 11:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:28.170 11:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:28.427 [2024-10-27 11:30:13.565941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.565982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:28.427 [2024-10-27 11:30:13.565996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:28.427 [2024-10-27 11:30:13.566005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.566044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:28.427 [2024-10-27 11:30:13.568714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.568742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:28.427 [2024-10-27 11:30:13.568754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.650 ms 00:15:28.427 [2024-10-27 11:30:13.568762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.569328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.569343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:28.427 [2024-10-27 11:30:13.569354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:15:28.427 [2024-10-27 11:30:13.569361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.572612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.572630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:28.427 
[2024-10-27 11:30:13.572643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.227 ms 00:15:28.427 [2024-10-27 11:30:13.572650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.578814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.578837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:28.427 [2024-10-27 11:30:13.578849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:15:28.427 [2024-10-27 11:30:13.578856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.603142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.603174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:28.427 [2024-10-27 11:30:13.603186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.204 ms 00:15:28.427 [2024-10-27 11:30:13.603194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.618655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.618777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:28.427 [2024-10-27 11:30:13.618798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.399 ms 00:15:28.427 [2024-10-27 11:30:13.618806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.619014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.619025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:28.427 [2024-10-27 11:30:13.619035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:15:28.427 [2024-10-27 11:30:13.619042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.642571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.642601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:28.427 [2024-10-27 11:30:13.642612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.492 ms 00:15:28.427 [2024-10-27 11:30:13.642619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.665627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.665733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:28.427 [2024-10-27 11:30:13.665751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.957 ms 00:15:28.427 [2024-10-27 11:30:13.665758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.427 [2024-10-27 11:30:13.688710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.427 [2024-10-27 11:30:13.688812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:28.427 [2024-10-27 11:30:13.688829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.911 ms 00:15:28.427 [2024-10-27 11:30:13.688836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.686 [2024-10-27 11:30:13.711601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.686 [2024-10-27 11:30:13.711703] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:28.686 [2024-10-27 11:30:13.711719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.673 ms 00:15:28.686 [2024-10-27 11:30:13.711726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.686 [2024-10-27 11:30:13.711769] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:28.686 [2024-10-27 11:30:13.711782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 
[2024-10-27 11:30:13.711966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:28.686 [2024-10-27 11:30:13.711999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:28.687 [2024-10-27 11:30:13.712178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:28.687 [2024-10-27 11:30:13.712668] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:28.687 [2024-10-27 11:30:13.712678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5cb2f974-6cc6-4c6a-8575-80770559fc01 00:15:28.687 [2024-10-27 11:30:13.712685] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:28.687 [2024-10-27 11:30:13.712695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:28.687 [2024-10-27 11:30:13.712702] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:28.687 [2024-10-27 11:30:13.712710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:28.688 [2024-10-27 11:30:13.712717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:28.688 [2024-10-27 11:30:13.712728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:28.688 [2024-10-27 11:30:13.712734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:28.688 [2024-10-27 11:30:13.712742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:28.688 [2024-10-27 11:30:13.712748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:28.688 [2024-10-27 11:30:13.712757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.688 [2024-10-27 11:30:13.712764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:28.688 [2024-10-27 11:30:13.712773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:15:28.688 [2024-10-27 11:30:13.712780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.725354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.688 [2024-10-27 11:30:13.725382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:28.688 [2024-10-27 11:30:13.725393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.524 ms 00:15:28.688 [2024-10-27 11:30:13.725402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.725758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:28.688 [2024-10-27 11:30:13.725767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:28.688 [2024-10-27 11:30:13.725777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:15:28.688 [2024-10-27 11:30:13.725784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.769815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.769849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:28.688 [2024-10-27 11:30:13.769863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.769870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:28.688 [2024-10-27 11:30:13.769942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.769950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:28.688 [2024-10-27 11:30:13.769960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.769967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.770053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.770063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:28.688 [2024-10-27 11:30:13.770072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.770081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.770111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.770118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:28.688 [2024-10-27 11:30:13.770127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.770134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.851912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.851948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:28.688 [2024-10-27 11:30:13.851959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.851968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.915535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.915568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:28.688 [2024-10-27 11:30:13.915580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.915587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.915676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.915685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:28.688 [2024-10-27 11:30:13.915695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.915702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.915784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.915793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:28.688 [2024-10-27 11:30:13.915802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.915809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.915914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.915923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:28.688 [2024-10-27 11:30:13.915933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 
11:30:13.915940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.915987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.915997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:28.688 [2024-10-27 11:30:13.916009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.916015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.916060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.916069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:28.688 [2024-10-27 11:30:13.916078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.916085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.916138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:28.688 [2024-10-27 11:30:13.916148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:28.688 [2024-10-27 11:30:13.916157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:28.688 [2024-10-27 11:30:13.916164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:28.688 [2024-10-27 11:30:13.916360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.370 ms, result 0 00:15:28.688 true 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72235 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72235 ']' 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72235 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.688 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72235 00:15:28.946 killing process with pid 72235 00:15:28.946 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.946 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.946 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72235' 00:15:28.946 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72235 00:15:28.946 11:30:13 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72235 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.568 11:30:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:35.568 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:35.568 fio-3.35 00:15:35.568 Starting 1 thread 00:15:39.775 00:15:39.775 test: (groupid=0, jobs=1): err= 0: pid=72415: Sun Oct 27 11:30:24 2024 00:15:39.775 read: IOPS=1080, BW=71.8MiB/s (75.2MB/s)(255MiB/3547msec) 00:15:39.775 slat (nsec): min=2875, max=28759, avg=4390.48, stdev=2164.51 00:15:39.775 clat (usec): min=244, max=1441, avg=419.26, stdev=159.27 00:15:39.775 lat (usec): min=248, max=1456, avg=423.65, stdev=160.18 00:15:39.775 clat percentiles (usec): 00:15:39.775 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 322], 00:15:39.775 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 392], 00:15:39.775 | 70.00th=[ 457], 80.00th=[ 523], 90.00th=[ 570], 95.00th=[ 848], 00:15:39.775 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[ 1172], 99.95th=[ 1221], 00:15:39.775 | 99.99th=[ 1450] 00:15:39.775 write: IOPS=1088, BW=72.3MiB/s (75.8MB/s)(256MiB/3544msec); 0 zone resets 00:15:39.775 slat (usec): min=13, max=197, avg=18.57, stdev= 4.77 00:15:39.775 clat (usec): min=289, max=1401, avg=466.89, stdev=186.10 00:15:39.775 lat (usec): min=304, max=1420, avg=485.46, stdev=187.86 00:15:39.775 clat percentiles (usec): 00:15:39.775 | 1.00th=[ 306], 5.00th=[ 310], 10.00th=[ 322], 20.00th=[ 347], 00:15:39.775 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 441], 00:15:39.775 | 70.00th=[ 506], 80.00th=[ 570], 90.00th=[ 676], 95.00th=[ 930], 00:15:39.775 | 99.00th=[ 1074], 99.50th=[ 1205], 99.90th=[ 1336], 99.95th=[ 1369], 00:15:39.775 | 99.99th=[ 1401] 00:15:39.775 bw ( KiB/s): min=37944, max=92208, per=99.76%, avg=73809.14, stdev=21076.45, samples=7 00:15:39.775 iops : min= 558, max= 1356, avg=1085.43, stdev=309.95, samples=7 00:15:39.775 lat (usec) : 250=0.01%, 500=73.31%, 750=18.87%, 
1000=6.68% 00:15:39.775 lat (msec) : 2=1.12% 00:15:39.775 cpu : usr=99.27%, sys=0.08%, ctx=8, majf=0, minf=1169 00:15:39.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:39.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.775 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.775 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:39.775 00:15:39.775 Run status group 0 (all jobs): 00:15:39.775 READ: bw=71.8MiB/s (75.2MB/s), 71.8MiB/s-71.8MiB/s (75.2MB/s-75.2MB/s), io=255MiB (267MB), run=3547-3547msec 00:15:39.775 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=256MiB (269MB), run=3544-3544msec 00:15:41.161 ----------------------------------------------------- 00:15:41.161 Suppressions used: 00:15:41.161 count bytes template 00:15:41.161 1 5 /usr/src/fio/parse.c 00:15:41.161 1 8 libtcmalloc_minimal.so 00:15:41.161 1 904 libcrypto.so 00:15:41.161 ----------------------------------------------------- 00:15:41.161 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.161 11:30:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:41.161 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:41.161 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:41.161 fio-3.35 00:15:41.161 Starting 2 threads 00:16:07.731 00:16:07.731 first_half: (groupid=0, jobs=1): err= 0: pid=72511: Sun Oct 27 11:30:49 2024 00:16:07.731 read: IOPS=3033, BW=11.8MiB/s (12.4MB/s)(256MiB/21582msec) 00:16:07.731 slat (nsec): min=2910, max=87931, avg=4395.62, stdev=1360.64 00:16:07.731 clat (usec): min=502, max=306969, avg=35537.30, stdev=22881.53 00:16:07.731 lat (usec): min=508, max=306973, avg=35541.70, stdev=22881.67 00:16:07.731 clat percentiles (msec): 00:16:07.731 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:16:07.731 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:16:07.731 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 41], 95.00th=[ 68], 00:16:07.731 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 247], 99.95th=[ 266], 00:16:07.731 | 99.99th=[ 300] 00:16:07.731 write: IOPS=3043, BW=11.9MiB/s (12.5MB/s)(256MiB/21534msec); 0 zone resets 00:16:07.731 slat (usec): min=3, max=581, avg= 5.86, stdev= 4.10 00:16:07.731 clat (usec): min=361, max=67334, avg=6624.55, stdev=7180.27 00:16:07.731 lat (usec): min=367, max=67341, avg=6630.42, stdev=7180.56 00:16:07.731 clat percentiles (usec): 00:16:07.731 | 1.00th=[ 734], 5.00th=[ 881], 10.00th=[ 1139], 20.00th=[ 2376], 00:16:07.731 | 30.00th=[ 3064], 40.00th=[ 3851], 50.00th=[ 4752], 60.00th=[ 5342], 00:16:07.731 | 70.00th=[ 5997], 80.00th=[ 8848], 90.00th=[13304], 95.00th=[23725], 00:16:07.731 | 99.00th=[32375], 99.50th=[35390], 99.90th=[61604], 99.95th=[62653], 00:16:07.731 | 99.99th=[65799] 00:16:07.731 bw ( KiB/s): min= 680, max=52320, per=89.12%, avg=21697.00, stdev=15698.37, samples=24 00:16:07.731 iops : min= 170, max=13080, avg=5424.25, stdev=3924.59, samples=24 00:16:07.731 lat (usec) : 500=0.03%, 750=0.62%, 1000=3.14% 00:16:07.731 lat (msec) : 2=4.76%, 4=12.16%, 10=21.20%, 20=6.68%, 50=47.93% 00:16:07.731 lat (msec) : 100=1.78%, 250=1.65%, 500=0.04% 00:16:07.731 cpu : usr=99.25%, sys=0.13%, ctx=48, majf=0, minf=5533 00:16:07.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:07.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.731 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.731 issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.731 second_half: (groupid=0, jobs=1): err= 0: pid=72512: Sun Oct 27 11:30:49 2024 00:16:07.731 read: IOPS=3070, BW=12.0MiB/s (12.6MB/s)(256MiB/21327msec) 00:16:07.731 slat (usec): min=2, max=152, avg= 4.49, stdev= 1.49 00:16:07.731 clat (msec): min=8, max=287, avg=35.91, stdev=20.91 00:16:07.731 lat (msec): min=8, max=287, avg=35.91, stdev=20.91 00:16:07.731 clat percentiles (msec): 00:16:07.731 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 29], 00:16:07.731 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:16:07.731 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 42], 95.00th=[ 66], 00:16:07.731 | 
99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 224], 99.95th=[ 247], 00:16:07.731 | 99.99th=[ 279] 00:16:07.731 write: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(256MiB/21201msec); 0 zone resets 00:16:07.731 slat (usec): min=3, max=1701, avg= 5.83, stdev= 7.64 00:16:07.731 clat (usec): min=372, max=32782, avg=5753.95, stdev=4082.26 00:16:07.731 lat (usec): min=379, max=32788, avg=5759.78, stdev=4082.29 00:16:07.731 clat percentiles (usec): 00:16:07.731 | 1.00th=[ 758], 5.00th=[ 1418], 10.00th=[ 2311], 20.00th=[ 2933], 00:16:07.731 | 30.00th=[ 3589], 40.00th=[ 4228], 50.00th=[ 4883], 60.00th=[ 5407], 00:16:07.732 | 70.00th=[ 5735], 80.00th=[ 7570], 90.00th=[11076], 95.00th=[13304], 00:16:07.732 | 99.00th=[22676], 99.50th=[28181], 99.90th=[31327], 99.95th=[31851], 00:16:07.732 | 99.99th=[32113] 00:16:07.732 bw ( KiB/s): min= 104, max=46256, per=92.96%, avg=22634.96, stdev=15431.91, samples=23 00:16:07.732 iops : min= 26, max=11564, avg=5658.74, stdev=3857.98, samples=23 00:16:07.732 lat (usec) : 500=0.02%, 750=0.41%, 1000=0.81% 00:16:07.732 lat (msec) : 2=2.48%, 4=14.28%, 10=25.24%, 20=6.33%, 50=47.06% 00:16:07.732 lat (msec) : 100=1.79%, 250=1.55%, 500=0.02% 00:16:07.732 cpu : usr=99.33%, sys=0.09%, ctx=33, majf=0, minf=5578 00:16:07.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:07.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.732 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.732 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.732 00:16:07.732 Run status group 0 (all jobs): 00:16:07.732 READ: bw=23.7MiB/s (24.9MB/s), 11.8MiB/s-12.0MiB/s (12.4MB/s-12.6MB/s), io=512MiB (536MB), run=21327-21582msec 00:16:07.732 WRITE: bw=23.8MiB/s (24.9MB/s), 11.9MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=512MiB (537MB), run=21201-21534msec 00:16:07.732 ----------------------------------------------------- 00:16:07.732 Suppressions used: 00:16:07.732 count bytes template 00:16:07.732 2 10 /usr/src/fio/parse.c 00:16:07.732 2 192 /usr/src/fio/iolog.c 00:16:07.732 1 8 libtcmalloc_minimal.so 00:16:07.732 1 904 libcrypto.so 00:16:07.732 ----------------------------------------------------- 00:16:07.732 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:07.732 11:30:51 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.732 11:30:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.732 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:07.732 fio-3.35 00:16:07.732 Starting 1 thread 00:16:22.644 00:16:22.645 test: (groupid=0, jobs=1): err= 0: pid=72804: Sun Oct 27 11:31:05 2024 00:16:22.645 read: IOPS=8219, BW=32.1MiB/s (33.7MB/s)(255MiB/7933msec) 00:16:22.645 slat (nsec): min=2989, max=77657, avg=3532.07, stdev=1062.91 00:16:22.645 clat (usec): min=459, max=46828, avg=15564.68, stdev=1973.11 00:16:22.645 lat (usec): min=465, max=46837, avg=15568.21, stdev=1973.49 00:16:22.645 clat percentiles (usec): 00:16:22.645 | 1.00th=[14353], 5.00th=[14484], 10.00th=[14615], 20.00th=[14746], 00:16:22.645 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:16:22.645 | 70.00th=[15401], 80.00th=[15533], 90.00th=[16319], 95.00th=[19792], 00:16:22.645 | 99.00th=[24249], 99.50th=[25035], 99.90th=[34341], 99.95th=[40633], 00:16:22.645 | 99.99th=[45351] 00:16:22.645 write: IOPS=13.1k, BW=51.1MiB/s (53.6MB/s)(256MiB/5006msec); 0 zone resets 00:16:22.645 slat (usec): min=4, max=311, avg= 5.66, stdev= 2.83 00:16:22.645 clat (usec): min=477, max=44502, avg=9739.86, stdev=9781.06 00:16:22.645 lat (usec): min=482, max=44507, avg=9745.52, stdev=9781.28 00:16:22.645 clat percentiles (usec): 00:16:22.645 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 807], 20.00th=[ 922], 00:16:22.645 | 30.00th=[ 1029], 40.00th=[ 1303], 50.00th=[ 5538], 60.00th=[11600], 00:16:22.645 | 70.00th=[15008], 80.00th=[17695], 90.00th=[26870], 95.00th=[28705], 00:16:22.645 | 99.00th=[31327], 99.50th=[33424], 99.90th=[36439], 99.95th=[36963], 00:16:22.645 | 99.99th=[43254] 00:16:22.645 bw ( KiB/s): min= 1016, max=78432, per=91.01%, avg=47656.55, stdev=22168.42, samples=11 00:16:22.645 iops : min= 254, max=19608, avg=11914.09, stdev=5542.14, samples=11 00:16:22.645 lat (usec) : 500=0.01%, 750=3.39%, 1000=10.36% 00:16:22.645 lat (msec) : 2=6.95%, 4=0.48%, 10=7.45%, 20=60.68%, 50=10.68% 00:16:22.645 cpu : usr=99.11%, sys=0.17%, ctx=17, majf=0, minf=5566 00:16:22.645 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:22.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.645 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.645 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.645 00:16:22.645 Run status group 0 (all jobs): 00:16:22.645 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=255MiB (267MB), run=7933-7933msec 00:16:22.645 WRITE: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=256MiB (268MB), run=5006-5006msec 00:16:22.645 ----------------------------------------------------- 00:16:22.645 Suppressions used: 00:16:22.645 count bytes template 00:16:22.645 1 5 /usr/src/fio/parse.c 00:16:22.645 2 192 /usr/src/fio/iolog.c 00:16:22.645 1 8 libtcmalloc_minimal.so 00:16:22.645 1 904 libcrypto.so 00:16:22.645 ----------------------------------------------------- 00:16:22.645 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:22.645 Remove shared memory files 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57080 /dev/shm/spdk_tgt_trace.pid71144 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:22.645 ************************************ 00:16:22.645 END TEST ftl_fio_basic 00:16:22.645 ************************************ 00:16:22.645 00:16:22.645 real 1m1.229s 00:16:22.645 user 2m8.006s 00:16:22.645 sys 0m2.774s 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.645 11:31:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:22.645 11:31:07 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:22.645 11:31:07 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:22.645 11:31:07 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.645 11:31:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:22.645 ************************************ 00:16:22.645 START TEST ftl_bdevperf 00:16:22.645 ************************************ 00:16:22.645 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:22.645 * Looking for test storage... 
00:16:22.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.645 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:22.645 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:16:22.645 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.907 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.908 --rc genhtml_branch_coverage=1 00:16:22.908 --rc genhtml_function_coverage=1 00:16:22.908 --rc genhtml_legend=1 00:16:22.908 --rc geninfo_all_blocks=1 00:16:22.908 --rc geninfo_unexecuted_blocks=1 00:16:22.908 00:16:22.908 ' 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.908 --rc genhtml_branch_coverage=1 00:16:22.908 
--rc genhtml_function_coverage=1 00:16:22.908 --rc genhtml_legend=1 00:16:22.908 --rc geninfo_all_blocks=1 00:16:22.908 --rc geninfo_unexecuted_blocks=1 00:16:22.908 00:16:22.908 ' 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.908 --rc genhtml_branch_coverage=1 00:16:22.908 --rc genhtml_function_coverage=1 00:16:22.908 --rc genhtml_legend=1 00:16:22.908 --rc geninfo_all_blocks=1 00:16:22.908 --rc geninfo_unexecuted_blocks=1 00:16:22.908 00:16:22.908 ' 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:22.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.908 --rc genhtml_branch_coverage=1 00:16:22.908 --rc genhtml_function_coverage=1 00:16:22.908 --rc genhtml_legend=1 00:16:22.908 --rc geninfo_all_blocks=1 00:16:22.908 --rc geninfo_unexecuted_blocks=1 00:16:22.908 00:16:22.908 ' 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.908 11:31:07 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73042 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73042 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73042 ']' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:22.908 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.908 [2024-10-27 11:31:08.087249] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:16:22.908 [2024-10-27 11:31:08.088145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73042 ] 00:16:23.169 [2024-10-27 11:31:08.248559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.169 [2024-10-27 11:31:08.372229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:23.743 11:31:08 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:24.005 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:24.267 { 00:16:24.267 "name": "nvme0n1", 00:16:24.267 "aliases": [ 00:16:24.267 "56d6612d-f0fa-4565-825c-3626fb3e55a4" 00:16:24.267 ], 00:16:24.267 "product_name": "NVMe disk", 00:16:24.267 "block_size": 4096, 00:16:24.267 "num_blocks": 1310720, 00:16:24.267 "uuid": "56d6612d-f0fa-4565-825c-3626fb3e55a4", 00:16:24.267 "numa_id": -1, 00:16:24.267 "assigned_rate_limits": { 00:16:24.267 "rw_ios_per_sec": 0, 00:16:24.267 "rw_mbytes_per_sec": 0, 00:16:24.267 "r_mbytes_per_sec": 0, 00:16:24.267 "w_mbytes_per_sec": 0 00:16:24.267 }, 00:16:24.267 "claimed": true, 00:16:24.267 "claim_type": "read_many_write_one", 00:16:24.267 "zoned": false, 00:16:24.267 "supported_io_types": { 00:16:24.267 "read": true, 00:16:24.267 "write": true, 00:16:24.267 "unmap": true, 00:16:24.267 "flush": true, 00:16:24.267 "reset": true, 00:16:24.267 "nvme_admin": true, 00:16:24.267 "nvme_io": true, 00:16:24.267 "nvme_io_md": false, 00:16:24.267 "write_zeroes": true, 00:16:24.267 "zcopy": false, 00:16:24.267 "get_zone_info": false, 00:16:24.267 "zone_management": false, 00:16:24.267 "zone_append": false, 00:16:24.267 "compare": true, 00:16:24.267 "compare_and_write": false, 00:16:24.267 "abort": true, 00:16:24.267 "seek_hole": false, 00:16:24.267 "seek_data": false, 00:16:24.267 "copy": true, 00:16:24.267 "nvme_iov_md": false 00:16:24.267 }, 00:16:24.267 "driver_specific": { 00:16:24.267 
"nvme": [ 00:16:24.267 { 00:16:24.267 "pci_address": "0000:00:11.0", 00:16:24.267 "trid": { 00:16:24.267 "trtype": "PCIe", 00:16:24.267 "traddr": "0000:00:11.0" 00:16:24.267 }, 00:16:24.267 "ctrlr_data": { 00:16:24.267 "cntlid": 0, 00:16:24.267 "vendor_id": "0x1b36", 00:16:24.267 "model_number": "QEMU NVMe Ctrl", 00:16:24.267 "serial_number": "12341", 00:16:24.267 "firmware_revision": "8.0.0", 00:16:24.267 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.267 "oacs": { 00:16:24.267 "security": 0, 00:16:24.267 "format": 1, 00:16:24.267 "firmware": 0, 00:16:24.267 "ns_manage": 1 00:16:24.267 }, 00:16:24.267 "multi_ctrlr": false, 00:16:24.267 "ana_reporting": false 00:16:24.267 }, 00:16:24.267 "vs": { 00:16:24.267 "nvme_version": "1.4" 00:16:24.267 }, 00:16:24.267 "ns_data": { 00:16:24.267 "id": 1, 00:16:24.267 "can_share": false 00:16:24.267 } 00:16:24.267 } 00:16:24.267 ], 00:16:24.267 "mp_policy": "active_passive" 00:16:24.267 } 00:16:24.267 } 00:16:24.267 ]' 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.267 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:24.539 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=151100c9-4d6d-4c23-ba19-ade17fa3e59a 00:16:24.539 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:24.539 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 151100c9-4d6d-4c23-ba19-ade17fa3e59a 00:16:24.808 11:31:09 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:25.070 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=18c13dfe-8167-4b2e-ae5d-c477ba45698b 00:16:25.070 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 18c13dfe-8167-4b2e-ae5d-c477ba45698b 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.331 11:31:10 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:25.331 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.592 { 00:16:25.592 "name": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:25.592 "aliases": [ 00:16:25.592 "lvs/nvme0n1p0" 00:16:25.592 ], 00:16:25.592 "product_name": "Logical Volume", 00:16:25.592 "block_size": 4096, 00:16:25.592 "num_blocks": 26476544, 00:16:25.592 "uuid": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:25.592 "assigned_rate_limits": { 00:16:25.592 "rw_ios_per_sec": 0, 00:16:25.592 "rw_mbytes_per_sec": 0, 00:16:25.592 "r_mbytes_per_sec": 0, 00:16:25.592 "w_mbytes_per_sec": 0 00:16:25.592 }, 00:16:25.592 "claimed": false, 00:16:25.592 "zoned": false, 00:16:25.592 "supported_io_types": { 00:16:25.592 "read": true, 00:16:25.592 "write": true, 00:16:25.592 "unmap": true, 00:16:25.592 "flush": false, 00:16:25.592 "reset": true, 00:16:25.592 "nvme_admin": false, 00:16:25.592 "nvme_io": false, 00:16:25.592 "nvme_io_md": false, 00:16:25.592 "write_zeroes": true, 00:16:25.592 "zcopy": false, 00:16:25.592 "get_zone_info": false, 00:16:25.592 "zone_management": false, 00:16:25.592 "zone_append": false, 00:16:25.592 "compare": false, 00:16:25.592 "compare_and_write": false, 00:16:25.592 "abort": false, 00:16:25.592 "seek_hole": true, 00:16:25.592 "seek_data": true, 00:16:25.592 "copy": false, 00:16:25.592 "nvme_iov_md": false 00:16:25.592 }, 00:16:25.592 "driver_specific": { 00:16:25.592 "lvol": { 00:16:25.592 "lvol_store_uuid": "18c13dfe-8167-4b2e-ae5d-c477ba45698b", 00:16:25.592 "base_bdev": "nvme0n1", 00:16:25.592 "thin_provision": true, 00:16:25.592 "num_allocated_clusters": 0, 00:16:25.592 "snapshot": false, 00:16:25.592 "clone": false, 00:16:25.592 "esnap_clone": false 00:16:25.592 } 00:16:25.592 } 00:16:25.592 } 00:16:25.592 ]' 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.592 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:25.854 11:31:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:26.114 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.114 { 00:16:26.114 "name": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:26.114 "aliases": [ 00:16:26.114 "lvs/nvme0n1p0" 00:16:26.114 ], 00:16:26.114 "product_name": "Logical Volume", 00:16:26.114 "block_size": 4096, 00:16:26.114 "num_blocks": 26476544, 00:16:26.114 "uuid": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:26.114 "assigned_rate_limits": { 00:16:26.114 "rw_ios_per_sec": 0, 00:16:26.114 "rw_mbytes_per_sec": 0, 00:16:26.114 "r_mbytes_per_sec": 0, 00:16:26.114 "w_mbytes_per_sec": 0 00:16:26.114 }, 00:16:26.114 "claimed": false, 00:16:26.114 "zoned": false, 00:16:26.114 "supported_io_types": { 00:16:26.114 "read": true, 00:16:26.114 "write": true, 00:16:26.114 "unmap": true, 00:16:26.114 "flush": false, 00:16:26.114 "reset": true, 00:16:26.114 "nvme_admin": false, 00:16:26.114 "nvme_io": false, 00:16:26.114 "nvme_io_md": false, 00:16:26.114 "write_zeroes": true, 00:16:26.114 "zcopy": false, 00:16:26.114 "get_zone_info": false, 00:16:26.114 "zone_management": false, 00:16:26.114 "zone_append": false, 00:16:26.114 "compare": false, 00:16:26.114 "compare_and_write": false, 00:16:26.114 "abort": false, 00:16:26.115 "seek_hole": true, 00:16:26.115 "seek_data": true, 00:16:26.115 "copy": false, 00:16:26.115 "nvme_iov_md": false 00:16:26.115 }, 00:16:26.115 "driver_specific": { 00:16:26.115 "lvol": { 00:16:26.115 "lvol_store_uuid": "18c13dfe-8167-4b2e-ae5d-c477ba45698b", 00:16:26.115 "base_bdev": "nvme0n1", 00:16:26.115 "thin_provision": true, 00:16:26.115 "num_allocated_clusters": 0, 00:16:26.115 "snapshot": false, 00:16:26.115 "clone": false, 00:16:26.115 "esnap_clone": false 00:16:26.115 } 00:16:26.115 } 00:16:26.115 } 00:16:26.115 ]' 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:26.115 11:31:11 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 894dd213-a8ff-4baa-8418-ef29e17a9093 00:16:26.376 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.376 { 00:16:26.376 "name": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:26.376 "aliases": [ 00:16:26.376 "lvs/nvme0n1p0" 00:16:26.376 ], 00:16:26.376 "product_name": "Logical Volume", 00:16:26.376 "block_size": 4096, 00:16:26.376 "num_blocks": 26476544, 00:16:26.376 "uuid": "894dd213-a8ff-4baa-8418-ef29e17a9093", 00:16:26.376 "assigned_rate_limits": { 00:16:26.376 "rw_ios_per_sec": 0, 00:16:26.377 "rw_mbytes_per_sec": 0, 00:16:26.377 "r_mbytes_per_sec": 0, 00:16:26.377 "w_mbytes_per_sec": 0 00:16:26.377 }, 00:16:26.377 "claimed": false, 00:16:26.377 "zoned": false, 00:16:26.377 "supported_io_types": { 00:16:26.377 "read": true, 00:16:26.377 "write": true, 00:16:26.377 "unmap": true, 00:16:26.377 "flush": false, 00:16:26.377 "reset": true, 00:16:26.377 "nvme_admin": false, 00:16:26.377 "nvme_io": false, 00:16:26.377 "nvme_io_md": false, 00:16:26.377 "write_zeroes": true, 00:16:26.377 "zcopy": false, 00:16:26.377 "get_zone_info": false, 00:16:26.377 "zone_management": false, 00:16:26.377 "zone_append": false, 00:16:26.377 "compare": false, 00:16:26.377 "compare_and_write": false, 00:16:26.377 "abort": false, 00:16:26.377 "seek_hole": true, 00:16:26.377 "seek_data": true, 00:16:26.377 "copy": false, 00:16:26.377 "nvme_iov_md": false 00:16:26.377 }, 00:16:26.377 "driver_specific": { 00:16:26.377 "lvol": { 00:16:26.377 "lvol_store_uuid": "18c13dfe-8167-4b2e-ae5d-c477ba45698b", 00:16:26.377 "base_bdev": "nvme0n1", 00:16:26.377 "thin_provision": true, 00:16:26.377 "num_allocated_clusters": 0, 00:16:26.377 "snapshot": false, 00:16:26.377 "clone": false, 00:16:26.377 "esnap_clone": false 00:16:26.377 } 00:16:26.377 } 00:16:26.377 } 00:16:26.377 ]' 00:16:26.377 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:26.639 11:31:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 894dd213-a8ff-4baa-8418-ef29e17a9093 -c nvc0n1p0 --l2p_dram_limit 20 00:16:26.639 [2024-10-27 11:31:11.891118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.639 [2024-10-27 11:31:11.891158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:26.639 [2024-10-27 11:31:11.891169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:26.639 [2024-10-27 11:31:11.891179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.639 [2024-10-27 11:31:11.891217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.639 [2024-10-27 11:31:11.891226] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.639 [2024-10-27 11:31:11.891233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:26.639 [2024-10-27 11:31:11.891254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.639 [2024-10-27 11:31:11.891267] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:26.639 [2024-10-27 11:31:11.891869] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:26.639 [2024-10-27 11:31:11.891920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.639 [2024-10-27 11:31:11.891930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.639 [2024-10-27 11:31:11.891937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:16:26.639 [2024-10-27 11:31:11.891944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.639 [2024-10-27 11:31:11.892487] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 262ff6c6-e8d5-437f-b0ce-7df6865ca3d0 00:16:26.639 [2024-10-27 11:31:11.894474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.639 [2024-10-27 11:31:11.894563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:26.639 [2024-10-27 11:31:11.894602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:26.639 [2024-10-27 11:31:11.894636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.639 [2024-10-27 11:31:11.902464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.639 [2024-10-27 11:31:11.902787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.639 [2024-10-27 11:31:11.902856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.696 ms 00:16:26.639 [2024-10-27 11:31:11.902897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.903149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.903182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.640 [2024-10-27 11:31:11.903226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:16:26.640 [2024-10-27 11:31:11.903248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.903438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.903472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:26.640 [2024-10-27 11:31:11.903502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:26.640 [2024-10-27 11:31:11.903524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.903584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:26.640 [2024-10-27 11:31:11.907689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.907717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.640 [2024-10-27 11:31:11.907726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:16:26.640 [2024-10-27 11:31:11.907736] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.907765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.907777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:26.640 [2024-10-27 11:31:11.907784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:26.640 [2024-10-27 11:31:11.907793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.907825] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:26.640 [2024-10-27 11:31:11.907960] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:26.640 [2024-10-27 11:31:11.907974] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:26.640 [2024-10-27 11:31:11.907987] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:26.640 [2024-10-27 11:31:11.907997] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908008] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908016] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:26.640 [2024-10-27 11:31:11.908025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:26.640 [2024-10-27 11:31:11.908032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:26.640 [2024-10-27 11:31:11.908041] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:26.640 [2024-10-27 11:31:11.908049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.908057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:26.640 [2024-10-27 11:31:11.908065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:16:26.640 [2024-10-27 11:31:11.908075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.908157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.640 [2024-10-27 11:31:11.908174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:26.640 [2024-10-27 11:31:11.908184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:26.640 [2024-10-27 11:31:11.908198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.640 [2024-10-27 11:31:11.908312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:26.640 [2024-10-27 11:31:11.908336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:26.640 [2024-10-27 11:31:11.908345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:26.640 [2024-10-27 11:31:11.908389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:26.640 
[2024-10-27 11:31:11.908407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:26.640 [2024-10-27 11:31:11.908423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.640 [2024-10-27 11:31:11.908439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:26.640 [2024-10-27 11:31:11.908447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:26.640 [2024-10-27 11:31:11.908454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.640 [2024-10-27 11:31:11.908468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:26.640 [2024-10-27 11:31:11.908475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:26.640 [2024-10-27 11:31:11.908486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:26.640 [2024-10-27 11:31:11.908505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:26.640 [2024-10-27 11:31:11.908538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:26.640 [2024-10-27 11:31:11.908570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:26.640 [2024-10-27 11:31:11.908599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:26.640 [2024-10-27 11:31:11.908623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:26.640 [2024-10-27 11:31:11.908650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.640 [2024-10-27 11:31:11.908675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:26.640 [2024-10-27 11:31:11.908688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:26.640 [2024-10-27 11:31:11.908695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.640 [2024-10-27 11:31:11.908703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:26.640 [2024-10-27 11:31:11.908714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:26.640 [2024-10-27 11:31:11.908725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:26.640 [2024-10-27 11:31:11.908742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:26.640 [2024-10-27 11:31:11.908751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:26.640 [2024-10-27 11:31:11.908771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:26.640 [2024-10-27 11:31:11.908782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.640 [2024-10-27 11:31:11.908800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:26.640 [2024-10-27 11:31:11.908808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:26.640 [2024-10-27 11:31:11.908816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:26.640 [2024-10-27 11:31:11.908823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:26.640 [2024-10-27 11:31:11.908831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:26.640 [2024-10-27 11:31:11.908840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:26.640 [2024-10-27 11:31:11.908856] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:26.640 [2024-10-27 11:31:11.908870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.640 [2024-10-27 11:31:11.908881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:26.640 [2024-10-27 11:31:11.908893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:26.640 [2024-10-27 11:31:11.908902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:26.640 [2024-10-27 11:31:11.908911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:26.640 [2024-10-27 11:31:11.908925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:26.640 [2024-10-27 11:31:11.908937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:26.640 [2024-10-27 11:31:11.908946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:26.640 [2024-10-27 11:31:11.908953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:26.640 [2024-10-27 11:31:11.908963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:26.640 [2024-10-27 11:31:11.908973] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:26.640 [2024-10-27 11:31:11.908985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:26.640 [2024-10-27 11:31:11.908997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:26.640 [2024-10-27 11:31:11.909008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:26.640 [2024-10-27 11:31:11.909016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:26.640 [2024-10-27 11:31:11.909031] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:26.640 [2024-10-27 11:31:11.909039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.641 [2024-10-27 11:31:11.909050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:26.641 [2024-10-27 11:31:11.909059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:26.641 [2024-10-27 11:31:11.909071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:26.641 [2024-10-27 11:31:11.909079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:26.641 [2024-10-27 11:31:11.909087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.641 [2024-10-27 11:31:11.909095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:26.641 [2024-10-27 11:31:11.909105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:16:26.641 [2024-10-27 11:31:11.909116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.641 [2024-10-27 11:31:11.909157] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:26.641 [2024-10-27 11:31:11.909173] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:30.846 [2024-10-27 11:31:15.726149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.726512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:30.846 [2024-10-27 11:31:15.726548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3816.961 ms 00:16:30.846 [2024-10-27 11:31:15.726562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.758179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.758401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.846 [2024-10-27 11:31:15.758432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.367 ms 00:16:30.846 [2024-10-27 11:31:15.758442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.758585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.758598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:30.846 [2024-10-27 11:31:15.758613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:30.846 [2024-10-27 11:31:15.758621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.809445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.809665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.846 [2024-10-27 11:31:15.809695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.784 ms 00:16:30.846 [2024-10-27 11:31:15.809705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.809749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.809760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.846 [2024-10-27 11:31:15.809771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:30.846 [2024-10-27 11:31:15.809782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.810383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.810419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.846 [2024-10-27 11:31:15.810435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:16:30.846 [2024-10-27 11:31:15.810445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.810566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.810576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.846 [2024-10-27 11:31:15.810589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:30.846 [2024-10-27 11:31:15.810598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.846 [2024-10-27 11:31:15.826342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.846 [2024-10-27 11:31:15.826387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.846 [2024-10-27 
11:31:15.826402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.723 ms 00:16:30.846 [2024-10-27 11:31:15.826410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.839585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:30.847 [2024-10-27 11:31:15.846679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.846730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:30.847 [2024-10-27 11:31:15.846742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.186 ms 00:16:30.847 [2024-10-27 11:31:15.846753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.946350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.946419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:30.847 [2024-10-27 11:31:15.946434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.567 ms 00:16:30.847 [2024-10-27 11:31:15.946445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.946645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.946663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:30.847 [2024-10-27 11:31:15.946673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:16:30.847 [2024-10-27 11:31:15.946684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.972716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.972968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:30.847 [2024-10-27 11:31:15.972992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.982 ms 00:16:30.847 [2024-10-27 11:31:15.973003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.998452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.998503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:30.847 [2024-10-27 11:31:15.998515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.333 ms 00:16:30.847 [2024-10-27 11:31:15.998525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:15.999136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:15.999163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:30.847 [2024-10-27 11:31:15.999175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:16:30.847 [2024-10-27 11:31:15.999186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 11:31:16.086661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:16.086723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:30.847 [2024-10-27 11:31:16.086736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.437 ms 00:16:30.847 [2024-10-27 11:31:16.086747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.847 [2024-10-27 
11:31:16.114460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.847 [2024-10-27 11:31:16.114510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:30.847 [2024-10-27 11:31:16.114523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.624 ms 00:16:30.847 [2024-10-27 11:31:16.114534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.108 [2024-10-27 11:31:16.140383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.108 [2024-10-27 11:31:16.140441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:31.108 [2024-10-27 11:31:16.140454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:16:31.108 [2024-10-27 11:31:16.140465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.108 [2024-10-27 11:31:16.166850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.108 [2024-10-27 11:31:16.166903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.108 [2024-10-27 11:31:16.166915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.340 ms 00:16:31.108 [2024-10-27 11:31:16.166926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.108 [2024-10-27 11:31:16.166976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.108 [2024-10-27 11:31:16.166994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.108 [2024-10-27 11:31:16.167003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:31.108 [2024-10-27 11:31:16.167014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.108 [2024-10-27 11:31:16.167117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.108 [2024-10-27 11:31:16.167132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.108 [2024-10-27 11:31:16.167140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:31.108 [2024-10-27 11:31:16.167151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.108 [2024-10-27 11:31:16.168798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4277.122 ms, result 0 00:16:31.108 { 00:16:31.108 "name": "ftl0", 00:16:31.108 "uuid": "262ff6c6-e8d5-437f-b0ce-7df6865ca3d0" 00:16:31.108 } 00:16:31.108 11:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:31.108 11:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:31.108 11:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:31.370 11:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:31.370 [2024-10-27 11:31:16.500475] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.370 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:31.370 Zero copy mechanism will not be used. 00:16:31.370 Running I/O for 4 seconds... 
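For reference while reading the results that follow, the ftl0 device under test was assembled by the harness with roughly the RPC sequence below. This is a consolidated sketch, not captured output; every command appears in the shell trace above, <lvs-uuid> and <lvol-uuid> stand for the lvstore and logical-volume UUIDs printed earlier, and the rpc.py path assumes the same spdk_repo checkout.

  # Base device: the QEMU NVMe controller at 0000:00:11.0 (1310720 x 4 KiB blocks = 5 GiB)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # Thin-provisioned 103424 MiB logical volume on an lvstore named "lvs"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
  # NV cache: second controller at 0000:00:10.0, split into one 5171 MiB partition
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of the lvol, write-buffer cache on nvc0n1p0, 20 MiB L2P DRAM limit
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20
  # Note: 20971520 L2P entries x 4 B = 80 MiB of map; --l2p_dram_limit caps the resident part (the log reports 19 of 20 MiB)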
00:16:33.257 754.00 IOPS, 50.07 MiB/s [2024-10-27T11:31:19.925Z] 744.00 IOPS, 49.41 MiB/s [2024-10-27T11:31:20.867Z] 942.33 IOPS, 62.58 MiB/s [2024-10-27T11:31:20.867Z] 926.75 IOPS, 61.54 MiB/s 00:16:35.586 Latency(us) 00:16:35.586 [2024-10-27T11:31:20.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.587 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:35.587 ftl0 : 4.00 926.48 61.52 0.00 0.00 1140.65 185.11 3125.56 00:16:35.587 [2024-10-27T11:31:20.868Z] =================================================================================================================== 00:16:35.587 [2024-10-27T11:31:20.868Z] Total : 926.48 61.52 0.00 0.00 1140.65 185.11 3125.56 00:16:35.587 { 00:16:35.587 "results": [ 00:16:35.587 { 00:16:35.587 "job": "ftl0", 00:16:35.587 "core_mask": "0x1", 00:16:35.587 "workload": "randwrite", 00:16:35.587 "status": "finished", 00:16:35.587 "queue_depth": 1, 00:16:35.587 "io_size": 69632, 00:16:35.587 "runtime": 4.00225, 00:16:35.587 "iops": 926.4788556437004, 00:16:35.587 "mibps": 61.52398650758948, 00:16:35.587 "io_failed": 0, 00:16:35.587 "io_timeout": 0, 00:16:35.587 "avg_latency_us": 1140.6482449589246, 00:16:35.587 "min_latency_us": 185.1076923076923, 00:16:35.587 "max_latency_us": 3125.563076923077 00:16:35.587 } 00:16:35.587 ], 00:16:35.587 "core_count": 1 00:16:35.587 } 00:16:35.587 [2024-10-27 11:31:20.511199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.587 11:31:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:35.587 [2024-10-27 11:31:20.600339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:35.587 Running I/O for 4 seconds... 
00:16:37.477 8972.00 IOPS, 35.05 MiB/s [2024-10-27T11:31:23.699Z] 8029.50 IOPS, 31.37 MiB/s [2024-10-27T11:31:24.691Z] 7257.33 IOPS, 28.35 MiB/s [2024-10-27T11:31:24.691Z] 7000.50 IOPS, 27.35 MiB/s 00:16:39.410 Latency(us) 00:16:39.410 [2024-10-27T11:31:24.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.410 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.410 ftl0 : 4.04 6969.86 27.23 0.00 0.00 18286.20 245.76 47790.87 00:16:39.410 [2024-10-27T11:31:24.691Z] =================================================================================================================== 00:16:39.410 [2024-10-27T11:31:24.691Z] Total : 6969.86 27.23 0.00 0.00 18286.20 0.00 47790.87 00:16:39.410 [2024-10-27 11:31:24.644435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:39.410 { 00:16:39.410 "results": [ 00:16:39.410 { 00:16:39.410 "job": "ftl0", 00:16:39.410 "core_mask": "0x1", 00:16:39.410 "workload": "randwrite", 00:16:39.410 "status": "finished", 00:16:39.410 "queue_depth": 128, 00:16:39.410 "io_size": 4096, 00:16:39.410 "runtime": 4.035947, 00:16:39.410 "iops": 6969.863578486041, 00:16:39.410 "mibps": 27.226029603461097, 00:16:39.410 "io_failed": 0, 00:16:39.410 "io_timeout": 0, 00:16:39.410 "avg_latency_us": 18286.198405862888, 00:16:39.410 "min_latency_us": 245.76, 00:16:39.410 "max_latency_us": 47790.86769230769 00:16:39.410 } 00:16:39.410 ], 00:16:39.410 "core_count": 1 00:16:39.410 } 00:16:39.410 11:31:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:39.670 [2024-10-27 11:31:24.761245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:39.670 Running I/O for 4 seconds... 
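The three workloads exercised against ftl0 in this stage correspond to the perform_tests calls below, consolidated here as a sketch from the commands in the trace (bdevperf itself was launched earlier by the harness and must be running for these RPC helper calls to connect):

  # 69632-byte random writes at queue depth 1 (above the 65536-byte zero-copy threshold noted in the log)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  # 4096-byte random writes at queue depth 128
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  # verify workload (4096-byte I/O) at queue depth 128
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096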
00:16:41.554 5178.00 IOPS, 20.23 MiB/s [2024-10-27T11:31:27.777Z] 5405.00 IOPS, 21.11 MiB/s [2024-10-27T11:31:29.164Z] 5719.67 IOPS, 22.34 MiB/s [2024-10-27T11:31:29.164Z] 5455.00 IOPS, 21.31 MiB/s 00:16:43.883 Latency(us) 00:16:43.883 [2024-10-27T11:31:29.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.883 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:43.883 Verification LBA range: start 0x0 length 0x1400000 00:16:43.883 ftl0 : 4.02 5462.30 21.34 0.00 0.00 23358.45 278.84 38515.00 00:16:43.883 [2024-10-27T11:31:29.164Z] =================================================================================================================== 00:16:43.883 [2024-10-27T11:31:29.164Z] Total : 5462.30 21.34 0.00 0.00 23358.45 0.00 38515.00 00:16:43.883 [2024-10-27 11:31:28.796153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:43.883 { 00:16:43.883 "results": [ 00:16:43.883 { 00:16:43.883 "job": "ftl0", 00:16:43.883 "core_mask": "0x1", 00:16:43.883 "workload": "verify", 00:16:43.883 "status": "finished", 00:16:43.883 "verify_range": { 00:16:43.883 "start": 0, 00:16:43.883 "length": 20971520 00:16:43.883 }, 00:16:43.883 "queue_depth": 128, 00:16:43.883 "io_size": 4096, 00:16:43.883 "runtime": 4.018085, 00:16:43.883 "iops": 5462.303560029218, 00:16:43.883 "mibps": 21.337123281364132, 00:16:43.883 "io_failed": 0, 00:16:43.883 "io_timeout": 0, 00:16:43.883 "avg_latency_us": 23358.450419593166, 00:16:43.883 "min_latency_us": 278.8430769230769, 00:16:43.883 "max_latency_us": 38515.00307692308 00:16:43.883 } 00:16:43.883 ], 00:16:43.883 "core_count": 1 00:16:43.883 } 00:16:43.883 11:31:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:43.883 [2024-10-27 11:31:29.015012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.883 [2024-10-27 11:31:29.015229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.883 [2024-10-27 11:31:29.015252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.883 [2024-10-27 11:31:29.015268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.883 [2024-10-27 11:31:29.015316] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.883 [2024-10-27 11:31:29.018285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.883 [2024-10-27 11:31:29.018336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.883 [2024-10-27 11:31:29.018351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:16:43.883 [2024-10-27 11:31:29.018361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.883 [2024-10-27 11:31:29.021197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.883 [2024-10-27 11:31:29.021245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.883 [2024-10-27 11:31:29.021259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:16:43.883 [2024-10-27 11:31:29.021268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.252780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.252994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:44.145 [2024-10-27 11:31:29.253024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 231.486 ms 00:16:44.145 [2024-10-27 11:31:29.253033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.259339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.259490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.145 [2024-10-27 11:31:29.259514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.259 ms 00:16:44.145 [2024-10-27 11:31:29.259523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.285629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.285689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.145 [2024-10-27 11:31:29.285705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.031 ms 00:16:44.145 [2024-10-27 11:31:29.285714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.302493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.302673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.145 [2024-10-27 11:31:29.302703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.728 ms 00:16:44.145 [2024-10-27 11:31:29.302715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.302939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.302963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.145 [2024-10-27 11:31:29.302979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:16:44.145 [2024-10-27 11:31:29.302987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.328779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.328824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.145 [2024-10-27 11:31:29.328839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.770 ms 00:16:44.145 [2024-10-27 11:31:29.328846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.145 [2024-10-27 11:31:29.353076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.145 [2024-10-27 11:31:29.353247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.145 [2024-10-27 11:31:29.353270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.182 ms 00:16:44.146 [2024-10-27 11:31:29.353277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.146 [2024-10-27 11:31:29.377797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.146 [2024-10-27 11:31:29.377850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.146 [2024-10-27 11:31:29.377867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.205 ms 00:16:44.146 [2024-10-27 11:31:29.377875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.146 [2024-10-27 11:31:29.402076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.146 [2024-10-27 11:31:29.402121] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.146 [2024-10-27 11:31:29.402139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.109 ms 00:16:44.146 [2024-10-27 11:31:29.402146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.146 [2024-10-27 11:31:29.402190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.146 [2024-10-27 11:31:29.402206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:44.146 [2024-10-27 11:31:29.402438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.402994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.146 [2024-10-27 11:31:29.403003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403098] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.147 [2024-10-27 11:31:29.403142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.147 [2024-10-27 11:31:29.403152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 262ff6c6-e8d5-437f-b0ce-7df6865ca3d0 00:16:44.147 [2024-10-27 11:31:29.403162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.147 [2024-10-27 11:31:29.403171] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.147 [2024-10-27 11:31:29.403179] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.147 [2024-10-27 11:31:29.403189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.147 [2024-10-27 11:31:29.403198] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.147 [2024-10-27 11:31:29.403208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.147 [2024-10-27 11:31:29.403215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.147 [2024-10-27 11:31:29.403226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.147 [2024-10-27 11:31:29.403232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.147 [2024-10-27 11:31:29.403241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.147 [2024-10-27 11:31:29.403249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.147 [2024-10-27 11:31:29.403259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:16:44.147 [2024-10-27 11:31:29.403266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.147 [2024-10-27 11:31:29.416806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.147 [2024-10-27 11:31:29.416848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.147 [2024-10-27 11:31:29.416865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.486 ms 00:16:44.147 [2024-10-27 11:31:29.416873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.147 [2024-10-27 11:31:29.417255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.147 [2024-10-27 11:31:29.417264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.147 [2024-10-27 11:31:29.417276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:16:44.147 [2024-10-27 11:31:29.417283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.409 [2024-10-27 11:31:29.456177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.409 [2024-10-27 11:31:29.456399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.409 [2024-10-27 11:31:29.456426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.409 [2024-10-27 11:31:29.456435] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:44.409 [2024-10-27 11:31:29.456520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.409 [2024-10-27 11:31:29.456530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.409 [2024-10-27 11:31:29.456540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.409 [2024-10-27 11:31:29.456548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.409 [2024-10-27 11:31:29.456652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.409 [2024-10-27 11:31:29.456664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.409 [2024-10-27 11:31:29.456678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.409 [2024-10-27 11:31:29.456686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.409 [2024-10-27 11:31:29.456704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.409 [2024-10-27 11:31:29.456712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.409 [2024-10-27 11:31:29.456722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.409 [2024-10-27 11:31:29.456729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.409 [2024-10-27 11:31:29.540554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.409 [2024-10-27 11:31:29.540608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.409 [2024-10-27 11:31:29.540628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.409 [2024-10-27 11:31:29.540636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.609622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.609674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.410 [2024-10-27 11:31:29.609688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.609697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.609782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.609792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.410 [2024-10-27 11:31:29.609804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.609816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.609883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.609894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.410 [2024-10-27 11:31:29.609904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.609913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.610010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.610020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.410 [2024-10-27 11:31:29.610033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:44.410 [2024-10-27 11:31:29.610043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.610080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.610089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.410 [2024-10-27 11:31:29.610100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.610109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.610152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.610162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.410 [2024-10-27 11:31:29.610172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.610181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.610234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.410 [2024-10-27 11:31:29.610252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.410 [2024-10-27 11:31:29.610264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.410 [2024-10-27 11:31:29.610272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.410 [2024-10-27 11:31:29.610449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.383 ms, result 0 00:16:44.410 true 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73042 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73042 ']' 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73042 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73042 00:16:44.410 killing process with pid 73042 00:16:44.410 Received shutdown signal, test time was about 4.000000 seconds 00:16:44.410 00:16:44.410 Latency(us) 00:16:44.410 [2024-10-27T11:31:29.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.410 [2024-10-27T11:31:29.691Z] =================================================================================================================== 00:16:44.410 [2024-10-27T11:31:29.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73042' 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73042 00:16:44.410 11:31:29 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73042 00:16:47.715 Remove shared memory files 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:47.715 11:31:32 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:47.715 ************************************ 00:16:47.715 END TEST ftl_bdevperf 00:16:47.715 ************************************ 00:16:47.715 00:16:47.715 real 0m24.534s 00:16:47.715 user 0m27.076s 00:16:47.715 sys 0m1.054s 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.715 11:31:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:47.715 11:31:32 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.715 11:31:32 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:47.715 11:31:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.715 11:31:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:47.715 ************************************ 00:16:47.715 START TEST ftl_trim 00:16:47.715 ************************************ 00:16:47.715 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:47.715 * Looking for test storage... 00:16:47.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.715 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:47.715 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lcov --version 00:16:47.715 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:47.715 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:47.715 11:31:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.716 11:31:32 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.716 --rc genhtml_branch_coverage=1 00:16:47.716 --rc genhtml_function_coverage=1 00:16:47.716 --rc genhtml_legend=1 00:16:47.716 --rc geninfo_all_blocks=1 00:16:47.716 --rc geninfo_unexecuted_blocks=1 00:16:47.716 00:16:47.716 ' 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.716 --rc genhtml_branch_coverage=1 00:16:47.716 --rc genhtml_function_coverage=1 00:16:47.716 --rc genhtml_legend=1 00:16:47.716 --rc geninfo_all_blocks=1 00:16:47.716 --rc geninfo_unexecuted_blocks=1 00:16:47.716 00:16:47.716 ' 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.716 --rc genhtml_branch_coverage=1 00:16:47.716 --rc genhtml_function_coverage=1 00:16:47.716 --rc genhtml_legend=1 00:16:47.716 --rc geninfo_all_blocks=1 00:16:47.716 --rc geninfo_unexecuted_blocks=1 00:16:47.716 00:16:47.716 ' 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:47.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.716 --rc genhtml_branch_coverage=1 00:16:47.716 --rc genhtml_function_coverage=1 00:16:47.716 --rc genhtml_legend=1 00:16:47.716 --rc geninfo_all_blocks=1 00:16:47.716 --rc geninfo_unexecuted_blocks=1 00:16:47.716 00:16:47.716 ' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:47.716 11:31:32 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73404 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73404 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73404 ']' 00:16:47.716 11:31:32 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.716 11:31:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:47.716 [2024-10-27 11:31:32.685258] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:16:47.716 [2024-10-27 11:31:32.685421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73404 ] 00:16:47.716 [2024-10-27 11:31:32.849100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.716 [2024-10-27 11:31:32.971075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.716 [2024-10-27 11:31:32.971774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.716 [2024-10-27 11:31:32.971857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.660 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.660 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:48.660 11:31:33 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:48.922 11:31:33 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:48.922 11:31:33 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:48.922 11:31:33 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:48.922 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:48.922 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:48.922 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:48.922 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:48.922 11:31:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:48.922 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:48.922 { 00:16:48.922 "name": "nvme0n1", 00:16:48.922 "aliases": [ 
00:16:48.922 "d5719b57-6a67-4d06-811a-7c6ea0a6f7ed" 00:16:48.922 ], 00:16:48.922 "product_name": "NVMe disk", 00:16:48.922 "block_size": 4096, 00:16:48.922 "num_blocks": 1310720, 00:16:48.922 "uuid": "d5719b57-6a67-4d06-811a-7c6ea0a6f7ed", 00:16:48.922 "numa_id": -1, 00:16:48.922 "assigned_rate_limits": { 00:16:48.922 "rw_ios_per_sec": 0, 00:16:48.922 "rw_mbytes_per_sec": 0, 00:16:48.922 "r_mbytes_per_sec": 0, 00:16:48.922 "w_mbytes_per_sec": 0 00:16:48.922 }, 00:16:48.922 "claimed": true, 00:16:48.922 "claim_type": "read_many_write_one", 00:16:48.922 "zoned": false, 00:16:48.922 "supported_io_types": { 00:16:48.922 "read": true, 00:16:48.922 "write": true, 00:16:48.922 "unmap": true, 00:16:48.922 "flush": true, 00:16:48.922 "reset": true, 00:16:48.922 "nvme_admin": true, 00:16:48.922 "nvme_io": true, 00:16:48.922 "nvme_io_md": false, 00:16:48.922 "write_zeroes": true, 00:16:48.922 "zcopy": false, 00:16:48.922 "get_zone_info": false, 00:16:48.922 "zone_management": false, 00:16:48.922 "zone_append": false, 00:16:48.922 "compare": true, 00:16:48.922 "compare_and_write": false, 00:16:48.922 "abort": true, 00:16:48.922 "seek_hole": false, 00:16:48.922 "seek_data": false, 00:16:48.922 "copy": true, 00:16:48.922 "nvme_iov_md": false 00:16:48.922 }, 00:16:48.922 "driver_specific": { 00:16:48.922 "nvme": [ 00:16:48.922 { 00:16:48.922 "pci_address": "0000:00:11.0", 00:16:48.922 "trid": { 00:16:48.922 "trtype": "PCIe", 00:16:48.922 "traddr": "0000:00:11.0" 00:16:48.922 }, 00:16:48.922 "ctrlr_data": { 00:16:48.922 "cntlid": 0, 00:16:48.922 "vendor_id": "0x1b36", 00:16:48.922 "model_number": "QEMU NVMe Ctrl", 00:16:48.922 "serial_number": "12341", 00:16:48.922 "firmware_revision": "8.0.0", 00:16:48.922 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:48.922 "oacs": { 00:16:48.922 "security": 0, 00:16:48.922 "format": 1, 00:16:48.922 "firmware": 0, 00:16:48.922 "ns_manage": 1 00:16:48.922 }, 00:16:48.922 "multi_ctrlr": false, 00:16:48.922 "ana_reporting": false 00:16:48.922 }, 00:16:48.922 "vs": { 00:16:48.922 "nvme_version": "1.4" 00:16:48.922 }, 00:16:48.922 "ns_data": { 00:16:48.922 "id": 1, 00:16:48.922 "can_share": false 00:16:48.922 } 00:16:48.922 } 00:16:48.922 ], 00:16:48.922 "mp_policy": "active_passive" 00:16:48.922 } 00:16:48.922 } 00:16:48.922 ]' 00:16:48.922 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:49.183 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:49.183 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:49.183 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:49.183 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:49.183 11:31:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=18c13dfe-8167-4b2e-ae5d-c477ba45698b 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:49.183 11:31:34 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 18c13dfe-8167-4b2e-ae5d-c477ba45698b 00:16:49.455 11:31:34 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:49.722 11:31:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=72335406-f125-4e5a-9fea-98deb3430f78 00:16:49.722 11:31:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 72335406-f125-4e5a-9fea-98deb3430f78 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=921b0f12-90ce-451c-af68-3191a383f917 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 921b0f12-90ce-451c-af68-3191a383f917 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=921b0f12-90ce-451c-af68-3191a383f917 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:49.983 11:31:35 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 921b0f12-90ce-451c-af68-3191a383f917 00:16:49.983 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=921b0f12-90ce-451c-af68-3191a383f917 00:16:49.983 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:49.983 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:49.983 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:49.983 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921b0f12-90ce-451c-af68-3191a383f917 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:50.244 { 00:16:50.244 "name": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:50.244 "aliases": [ 00:16:50.244 "lvs/nvme0n1p0" 00:16:50.244 ], 00:16:50.244 "product_name": "Logical Volume", 00:16:50.244 "block_size": 4096, 00:16:50.244 "num_blocks": 26476544, 00:16:50.244 "uuid": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:50.244 "assigned_rate_limits": { 00:16:50.244 "rw_ios_per_sec": 0, 00:16:50.244 "rw_mbytes_per_sec": 0, 00:16:50.244 "r_mbytes_per_sec": 0, 00:16:50.244 "w_mbytes_per_sec": 0 00:16:50.244 }, 00:16:50.244 "claimed": false, 00:16:50.244 "zoned": false, 00:16:50.244 "supported_io_types": { 00:16:50.244 "read": true, 00:16:50.244 "write": true, 00:16:50.244 "unmap": true, 00:16:50.244 "flush": false, 00:16:50.244 "reset": true, 00:16:50.244 "nvme_admin": false, 00:16:50.244 "nvme_io": false, 00:16:50.244 "nvme_io_md": false, 00:16:50.244 "write_zeroes": true, 00:16:50.244 "zcopy": false, 00:16:50.244 "get_zone_info": false, 00:16:50.244 "zone_management": false, 00:16:50.244 "zone_append": false, 00:16:50.244 "compare": false, 00:16:50.244 "compare_and_write": false, 00:16:50.244 "abort": false, 00:16:50.244 "seek_hole": true, 00:16:50.244 "seek_data": true, 00:16:50.244 "copy": false, 00:16:50.244 "nvme_iov_md": false 00:16:50.244 }, 00:16:50.244 "driver_specific": { 00:16:50.244 "lvol": { 00:16:50.244 "lvol_store_uuid": "72335406-f125-4e5a-9fea-98deb3430f78", 00:16:50.244 "base_bdev": "nvme0n1", 00:16:50.244 "thin_provision": true, 00:16:50.244 "num_allocated_clusters": 0, 00:16:50.244 "snapshot": false, 00:16:50.244 "clone": false, 00:16:50.244 "esnap_clone": false 00:16:50.244 } 00:16:50.244 } 00:16:50.244 } 00:16:50.244 ]' 00:16:50.244 11:31:35 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:50.244 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:50.244 11:31:35 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:50.244 11:31:35 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:50.244 11:31:35 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:50.505 11:31:35 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:50.505 11:31:35 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:50.505 11:31:35 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 921b0f12-90ce-451c-af68-3191a383f917 00:16:50.505 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=921b0f12-90ce-451c-af68-3191a383f917 00:16:50.505 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:50.505 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:50.505 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:50.505 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921b0f12-90ce-451c-af68-3191a383f917 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:50.792 { 00:16:50.792 "name": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:50.792 "aliases": [ 00:16:50.792 "lvs/nvme0n1p0" 00:16:50.792 ], 00:16:50.792 "product_name": "Logical Volume", 00:16:50.792 "block_size": 4096, 00:16:50.792 "num_blocks": 26476544, 00:16:50.792 "uuid": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:50.792 "assigned_rate_limits": { 00:16:50.792 "rw_ios_per_sec": 0, 00:16:50.792 "rw_mbytes_per_sec": 0, 00:16:50.792 "r_mbytes_per_sec": 0, 00:16:50.792 "w_mbytes_per_sec": 0 00:16:50.792 }, 00:16:50.792 "claimed": false, 00:16:50.792 "zoned": false, 00:16:50.792 "supported_io_types": { 00:16:50.792 "read": true, 00:16:50.792 "write": true, 00:16:50.792 "unmap": true, 00:16:50.792 "flush": false, 00:16:50.792 "reset": true, 00:16:50.792 "nvme_admin": false, 00:16:50.792 "nvme_io": false, 00:16:50.792 "nvme_io_md": false, 00:16:50.792 "write_zeroes": true, 00:16:50.792 "zcopy": false, 00:16:50.792 "get_zone_info": false, 00:16:50.792 "zone_management": false, 00:16:50.792 "zone_append": false, 00:16:50.792 "compare": false, 00:16:50.792 "compare_and_write": false, 00:16:50.792 "abort": false, 00:16:50.792 "seek_hole": true, 00:16:50.792 "seek_data": true, 00:16:50.792 "copy": false, 00:16:50.792 "nvme_iov_md": false 00:16:50.792 }, 00:16:50.792 "driver_specific": { 00:16:50.792 "lvol": { 00:16:50.792 "lvol_store_uuid": "72335406-f125-4e5a-9fea-98deb3430f78", 00:16:50.792 "base_bdev": "nvme0n1", 00:16:50.792 "thin_provision": true, 00:16:50.792 "num_allocated_clusters": 0, 00:16:50.792 "snapshot": false, 00:16:50.792 "clone": false, 00:16:50.792 "esnap_clone": false 00:16:50.792 } 00:16:50.792 } 00:16:50.792 } 00:16:50.792 ]' 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:50.792 11:31:35 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:50.792 11:31:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:50.792 11:31:35 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:50.792 11:31:35 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:51.053 11:31:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:51.053 11:31:36 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:51.053 11:31:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 921b0f12-90ce-451c-af68-3191a383f917 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=921b0f12-90ce-451c-af68-3191a383f917 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921b0f12-90ce-451c-af68-3191a383f917 00:16:51.053 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:51.053 { 00:16:51.053 "name": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:51.053 "aliases": [ 00:16:51.053 "lvs/nvme0n1p0" 00:16:51.053 ], 00:16:51.053 "product_name": "Logical Volume", 00:16:51.053 "block_size": 4096, 00:16:51.053 "num_blocks": 26476544, 00:16:51.053 "uuid": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:51.053 "assigned_rate_limits": { 00:16:51.053 "rw_ios_per_sec": 0, 00:16:51.053 "rw_mbytes_per_sec": 0, 00:16:51.053 "r_mbytes_per_sec": 0, 00:16:51.053 "w_mbytes_per_sec": 0 00:16:51.053 }, 00:16:51.053 "claimed": false, 00:16:51.053 "zoned": false, 00:16:51.053 "supported_io_types": { 00:16:51.053 "read": true, 00:16:51.053 "write": true, 00:16:51.053 "unmap": true, 00:16:51.053 "flush": false, 00:16:51.053 "reset": true, 00:16:51.054 "nvme_admin": false, 00:16:51.054 "nvme_io": false, 00:16:51.054 "nvme_io_md": false, 00:16:51.054 "write_zeroes": true, 00:16:51.054 "zcopy": false, 00:16:51.054 "get_zone_info": false, 00:16:51.054 "zone_management": false, 00:16:51.054 "zone_append": false, 00:16:51.054 "compare": false, 00:16:51.054 "compare_and_write": false, 00:16:51.054 "abort": false, 00:16:51.054 "seek_hole": true, 00:16:51.054 "seek_data": true, 00:16:51.054 "copy": false, 00:16:51.054 "nvme_iov_md": false 00:16:51.054 }, 00:16:51.054 "driver_specific": { 00:16:51.054 "lvol": { 00:16:51.054 "lvol_store_uuid": "72335406-f125-4e5a-9fea-98deb3430f78", 00:16:51.054 "base_bdev": "nvme0n1", 00:16:51.054 "thin_provision": true, 00:16:51.054 "num_allocated_clusters": 0, 00:16:51.054 "snapshot": false, 00:16:51.054 "clone": false, 00:16:51.054 "esnap_clone": false 00:16:51.054 } 00:16:51.054 } 00:16:51.054 } 00:16:51.054 ]' 00:16:51.054 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:51.054 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:51.054 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:51.315 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:51.315 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:51.315 11:31:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:51.315 11:31:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:51.316 11:31:36 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 921b0f12-90ce-451c-af68-3191a383f917 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:51.316 [2024-10-27 11:31:36.523917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.523966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.316 [2024-10-27 11:31:36.523984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.316 [2024-10-27 11:31:36.523994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.527050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.527086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.316 [2024-10-27 11:31:36.527102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:16:51.316 [2024-10-27 11:31:36.527110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.527233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.316 [2024-10-27 11:31:36.527990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.316 [2024-10-27 11:31:36.528014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.528023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.316 [2024-10-27 11:31:36.528034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:16:51.316 [2024-10-27 11:31:36.528043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.528155] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 140afd10-86e9-4fa8-ad6f-6665596a139a 00:16:51.316 [2024-10-27 11:31:36.529586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.529622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:51.316 [2024-10-27 11:31:36.529634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:51.316 [2024-10-27 11:31:36.529645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.536987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.537104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.316 [2024-10-27 11:31:36.537163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.268 ms 00:16:51.316 [2024-10-27 11:31:36.537198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.537403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.537446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.316 [2024-10-27 11:31:36.537473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.074 ms 00:16:51.316 [2024-10-27 11:31:36.537643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.537715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.537769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.316 [2024-10-27 11:31:36.538053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:51.316 [2024-10-27 11:31:36.538108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.538217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:51.316 [2024-10-27 11:31:36.542327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.542431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.316 [2024-10-27 11:31:36.542491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.115 ms 00:16:51.316 [2024-10-27 11:31:36.542538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.542635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.542700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.316 [2024-10-27 11:31:36.542734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:51.316 [2024-10-27 11:31:36.542792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.542871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:51.316 [2024-10-27 11:31:36.543096] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:51.316 [2024-10-27 11:31:36.543175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:51.316 [2024-10-27 11:31:36.543238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:51.316 [2024-10-27 11:31:36.543313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.316 [2024-10-27 11:31:36.543393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.316 [2024-10-27 11:31:36.543434] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:51.316 [2024-10-27 11:31:36.543512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.316 [2024-10-27 11:31:36.543538] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:51.316 [2024-10-27 11:31:36.543557] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:51.316 [2024-10-27 11:31:36.543580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 [2024-10-27 11:31:36.543607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.316 [2024-10-27 11:31:36.543682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:16:51.316 [2024-10-27 11:31:36.543709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.543827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.316 
[2024-10-27 11:31:36.543855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.316 [2024-10-27 11:31:36.543889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:51.316 [2024-10-27 11:31:36.543909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.316 [2024-10-27 11:31:36.544044] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.316 [2024-10-27 11:31:36.544073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.316 [2024-10-27 11:31:36.544100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.316 [2024-10-27 11:31:36.544155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.316 [2024-10-27 11:31:36.544210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.316 [2024-10-27 11:31:36.544237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.316 [2024-10-27 11:31:36.544283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:51.316 [2024-10-27 11:31:36.544342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.316 [2024-10-27 11:31:36.544372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:51.316 [2024-10-27 11:31:36.544483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.316 [2024-10-27 11:31:36.544510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.316 [2024-10-27 11:31:36.544533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:51.316 [2024-10-27 11:31:36.544619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.316 [2024-10-27 11:31:36.544646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.316 [2024-10-27 11:31:36.544668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:51.316 [2024-10-27 11:31:36.544688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.316 [2024-10-27 11:31:36.544710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:51.316 [2024-10-27 11:31:36.544733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:51.316 [2024-10-27 11:31:36.544789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.316 [2024-10-27 11:31:36.544960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.316 [2024-10-27 11:31:36.545056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:51.316 [2024-10-27 11:31:36.545083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.316 [2024-10-27 11:31:36.545105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.316 [2024-10-27 11:31:36.545150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:51.316 [2024-10-27 11:31:36.545172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.316 [2024-10-27 11:31:36.545217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.316 [2024-10-27 11:31:36.545246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:51.316 [2024-10-27 11:31:36.545316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.316 [2024-10-27 11:31:36.545343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:51.317 [2024-10-27 11:31:36.545367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:51.317 [2024-10-27 11:31:36.545388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.317 [2024-10-27 11:31:36.545407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.317 [2024-10-27 11:31:36.545434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:51.317 [2024-10-27 11:31:36.545493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.317 [2024-10-27 11:31:36.545523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.317 [2024-10-27 11:31:36.545543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:51.317 [2024-10-27 11:31:36.545564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.317 [2024-10-27 11:31:36.545583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:51.317 [2024-10-27 11:31:36.545640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:51.317 [2024-10-27 11:31:36.545805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.317 [2024-10-27 11:31:36.545819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:51.317 [2024-10-27 11:31:36.545826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:51.317 [2024-10-27 11:31:36.545835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.317 [2024-10-27 11:31:36.545842] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.317 [2024-10-27 11:31:36.545854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.317 [2024-10-27 11:31:36.545861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.317 [2024-10-27 11:31:36.545871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.317 [2024-10-27 11:31:36.545879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.317 [2024-10-27 11:31:36.545892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.317 [2024-10-27 11:31:36.545899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:51.317 [2024-10-27 11:31:36.545909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.317 [2024-10-27 11:31:36.545915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.317 [2024-10-27 11:31:36.545924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.317 [2024-10-27 11:31:36.545935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.317 [2024-10-27 11:31:36.545947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.545957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:51.317 [2024-10-27 11:31:36.545966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:51.317 [2024-10-27 11:31:36.545974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:51.317 [2024-10-27 11:31:36.545983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:51.317 [2024-10-27 11:31:36.545991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:51.317 [2024-10-27 11:31:36.545999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:51.317 [2024-10-27 11:31:36.546007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:51.317 [2024-10-27 11:31:36.546016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:51.317 [2024-10-27 11:31:36.546023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:51.317 [2024-10-27 11:31:36.546034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:51.317 [2024-10-27 11:31:36.546076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.317 [2024-10-27 11:31:36.546087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.317 [2024-10-27 11:31:36.546105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.317 [2024-10-27 11:31:36.546112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.317 [2024-10-27 11:31:36.546122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.317 [2024-10-27 11:31:36.546130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.317 [2024-10-27 11:31:36.546144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.317 [2024-10-27 11:31:36.546152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:16:51.317 [2024-10-27 11:31:36.546162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.317 [2024-10-27 11:31:36.546238] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:51.317 [2024-10-27 11:31:36.546253] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:53.948 [2024-10-27 11:31:38.891378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.891627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:53.948 [2024-10-27 11:31:38.891725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2345.124 ms 00:16:53.948 [2024-10-27 11:31:38.891756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.920242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.920417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:53.948 [2024-10-27 11:31:38.920574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.222 ms 00:16:53.948 [2024-10-27 11:31:38.920648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.920820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.920857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:53.948 [2024-10-27 11:31:38.920919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:53.948 [2024-10-27 11:31:38.920955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.964380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.964539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:53.948 [2024-10-27 11:31:38.964619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.363 ms 00:16:53.948 [2024-10-27 11:31:38.964655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.964756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.964793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:53.948 [2024-10-27 11:31:38.964819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.948 [2024-10-27 11:31:38.964883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.965336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.965439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:53.948 [2024-10-27 11:31:38.965497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:16:53.948 [2024-10-27 11:31:38.965524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.965658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.965688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:53.948 [2024-10-27 11:31:38.965741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:53.948 [2024-10-27 11:31:38.965773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.982908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:38.983017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:53.948 [2024-10-27 11:31:38.983075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.080 ms 00:16:53.948 [2024-10-27 11:31:38.983105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:38.995397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:53.948 [2024-10-27 11:31:39.012725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:39.012827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:53.948 [2024-10-27 11:31:39.012884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.498 ms 00:16:53.948 [2024-10-27 11:31:39.012915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:39.084907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:39.085042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:53.948 [2024-10-27 11:31:39.085113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.907 ms 00:16:53.948 [2024-10-27 11:31:39.085390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:39.085625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:39.085663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:53.948 [2024-10-27 11:31:39.085728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:53.948 [2024-10-27 11:31:39.085758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.948 [2024-10-27 11:31:39.109700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.948 [2024-10-27 11:31:39.109802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:53.948 [2024-10-27 11:31:39.109868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.849 ms 00:16:53.949 [2024-10-27 11:31:39.109892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.949 [2024-10-27 11:31:39.132772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.949 [2024-10-27 11:31:39.132897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:53.949 [2024-10-27 11:31:39.132975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.569 ms 00:16:53.949 [2024-10-27 11:31:39.133024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.949 [2024-10-27 11:31:39.133852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.949 [2024-10-27 11:31:39.133981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:53.949 [2024-10-27 11:31:39.134049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:16:53.949 [2024-10-27 11:31:39.134078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.949 [2024-10-27 11:31:39.205894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.949 [2024-10-27 11:31:39.205997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:53.949 [2024-10-27 11:31:39.206055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.754 ms 00:16:53.949 [2024-10-27 11:31:39.206107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
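Each FTL management step above is reported by mngt/ftl_mngt.c as an Action / name / duration / status group of trace_step notices. As a minimal sketch for summarizing this run (assuming the console output has been captured to a file, here called ftl_trim.log, which is a hypothetical name and not part of this log), the per-step names and durations can be paired up with standard tools:

    # keep only the "name:" and "duration:" trace_step lines, strip the log prefixes,
    # and join each step name with the duration line that follows it
    grep -E 'trace_step.*(name|duration):' ftl_trim.log \
      | sed -E 's/.*\] (name|duration): //' \
      | paste - -

This relies only on the format visible above (a "name:" notice immediately followed by its "duration:" notice); it is an analysis aid, not part of the test scripts.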
00:16:54.210 [2024-10-27 11:31:39.231439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.210 [2024-10-27 11:31:39.231567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:54.210 [2024-10-27 11:31:39.231633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.955 ms 00:16:54.210 [2024-10-27 11:31:39.231663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.210 [2024-10-27 11:31:39.255573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.210 [2024-10-27 11:31:39.255686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:54.210 [2024-10-27 11:31:39.255751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.525 ms 00:16:54.210 [2024-10-27 11:31:39.255780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.210 [2024-10-27 11:31:39.279601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.210 [2024-10-27 11:31:39.279724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:54.210 [2024-10-27 11:31:39.279790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.341 ms 00:16:54.210 [2024-10-27 11:31:39.279852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.210 [2024-10-27 11:31:39.279935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.210 [2024-10-27 11:31:39.280041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:54.210 [2024-10-27 11:31:39.280085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:54.210 [2024-10-27 11:31:39.280149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.210 [2024-10-27 11:31:39.280260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:54.210 [2024-10-27 11:31:39.280290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:54.210 [2024-10-27 11:31:39.280368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:54.210 [2024-10-27 11:31:39.280395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:54.210 [2024-10-27 11:31:39.281808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:54.210 [2024-10-27 11:31:39.284941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2757.394 ms, result 0 00:16:54.210 [2024-10-27 11:31:39.285718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:54.210 { 00:16:54.210 "name": "ftl0", 00:16:54.210 "uuid": "140afd10-86e9-4fa8-ad6f-6665596a139a" 00:16:54.210 } 00:16:54.210 11:31:39 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.210 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:54.492 11:31:39 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:54.493 [ 00:16:54.493 { 00:16:54.493 "name": "ftl0", 00:16:54.493 "aliases": [ 00:16:54.493 "140afd10-86e9-4fa8-ad6f-6665596a139a" 00:16:54.493 ], 00:16:54.493 "product_name": "FTL disk", 00:16:54.493 "block_size": 4096, 00:16:54.493 "num_blocks": 23592960, 00:16:54.493 "uuid": "140afd10-86e9-4fa8-ad6f-6665596a139a", 00:16:54.493 "assigned_rate_limits": { 00:16:54.493 "rw_ios_per_sec": 0, 00:16:54.493 "rw_mbytes_per_sec": 0, 00:16:54.493 "r_mbytes_per_sec": 0, 00:16:54.493 "w_mbytes_per_sec": 0 00:16:54.493 }, 00:16:54.493 "claimed": false, 00:16:54.493 "zoned": false, 00:16:54.493 "supported_io_types": { 00:16:54.493 "read": true, 00:16:54.493 "write": true, 00:16:54.493 "unmap": true, 00:16:54.493 "flush": true, 00:16:54.493 "reset": false, 00:16:54.493 "nvme_admin": false, 00:16:54.494 "nvme_io": false, 00:16:54.494 "nvme_io_md": false, 00:16:54.494 "write_zeroes": true, 00:16:54.494 "zcopy": false, 00:16:54.494 "get_zone_info": false, 00:16:54.494 "zone_management": false, 00:16:54.494 "zone_append": false, 00:16:54.494 "compare": false, 00:16:54.494 "compare_and_write": false, 00:16:54.494 "abort": false, 00:16:54.494 "seek_hole": false, 00:16:54.494 "seek_data": false, 00:16:54.494 "copy": false, 00:16:54.494 "nvme_iov_md": false 00:16:54.494 }, 00:16:54.494 "driver_specific": { 00:16:54.494 "ftl": { 00:16:54.494 "base_bdev": "921b0f12-90ce-451c-af68-3191a383f917", 00:16:54.494 "cache": "nvc0n1p0" 00:16:54.494 } 00:16:54.494 } 00:16:54.494 } 00:16:54.494 ] 00:16:54.494 11:31:39 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:54.494 11:31:39 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:54.494 11:31:39 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:54.766 11:31:39 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:54.766 11:31:39 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:55.026 11:31:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:55.026 { 00:16:55.026 "name": "ftl0", 00:16:55.026 "aliases": [ 00:16:55.026 "140afd10-86e9-4fa8-ad6f-6665596a139a" 00:16:55.026 ], 00:16:55.026 "product_name": "FTL disk", 00:16:55.026 "block_size": 4096, 00:16:55.026 "num_blocks": 23592960, 00:16:55.026 "uuid": "140afd10-86e9-4fa8-ad6f-6665596a139a", 00:16:55.026 "assigned_rate_limits": { 00:16:55.026 "rw_ios_per_sec": 0, 00:16:55.026 "rw_mbytes_per_sec": 0, 00:16:55.026 "r_mbytes_per_sec": 0, 00:16:55.026 "w_mbytes_per_sec": 0 00:16:55.026 }, 00:16:55.026 "claimed": false, 00:16:55.026 "zoned": false, 00:16:55.026 "supported_io_types": { 00:16:55.026 "read": true, 00:16:55.026 "write": true, 00:16:55.026 "unmap": true, 00:16:55.026 "flush": true, 00:16:55.026 "reset": false, 00:16:55.026 "nvme_admin": false, 00:16:55.026 "nvme_io": false, 00:16:55.026 "nvme_io_md": false, 00:16:55.026 "write_zeroes": true, 00:16:55.026 "zcopy": false, 00:16:55.026 "get_zone_info": false, 00:16:55.026 "zone_management": false, 00:16:55.026 "zone_append": false, 00:16:55.026 "compare": false, 00:16:55.026 "compare_and_write": false, 00:16:55.026 "abort": false, 00:16:55.026 "seek_hole": false, 00:16:55.026 "seek_data": false, 00:16:55.026 "copy": false, 00:16:55.026 "nvme_iov_md": false 00:16:55.026 }, 00:16:55.026 "driver_specific": { 00:16:55.026 "ftl": { 00:16:55.026 "base_bdev": "921b0f12-90ce-451c-af68-3191a383f917", 
00:16:55.026 "cache": "nvc0n1p0" 00:16:55.026 } 00:16:55.026 } 00:16:55.026 } 00:16:55.026 ]' 00:16:55.026 11:31:40 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:55.026 11:31:40 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:55.026 11:31:40 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:55.026 [2024-10-27 11:31:40.281348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.026 [2024-10-27 11:31:40.281388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:55.026 [2024-10-27 11:31:40.281400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.026 [2024-10-27 11:31:40.281409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.026 [2024-10-27 11:31:40.281440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:55.026 [2024-10-27 11:31:40.283665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.026 [2024-10-27 11:31:40.283691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:55.026 [2024-10-27 11:31:40.283707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:16:55.026 [2024-10-27 11:31:40.283714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.026 [2024-10-27 11:31:40.284187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.026 [2024-10-27 11:31:40.284199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:55.026 [2024-10-27 11:31:40.284209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:16:55.026 [2024-10-27 11:31:40.284215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.026 [2024-10-27 11:31:40.287131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.026 [2024-10-27 11:31:40.287197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:55.026 [2024-10-27 11:31:40.287242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:16:55.026 [2024-10-27 11:31:40.287269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.026 [2024-10-27 11:31:40.292806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.026 [2024-10-27 11:31:40.292894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:55.026 [2024-10-27 11:31:40.292955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.462 ms 00:16:55.026 [2024-10-27 11:31:40.292974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.311855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.311951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:55.286 [2024-10-27 11:31:40.312005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.803 ms 00:16:55.286 [2024-10-27 11:31:40.312027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.324698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.324798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:55.286 [2024-10-27 11:31:40.324851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.611 ms 00:16:55.286 [2024-10-27 11:31:40.324875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.325060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.325200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:55.286 [2024-10-27 11:31:40.325246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:55.286 [2024-10-27 11:31:40.325265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.343315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.343403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:55.286 [2024-10-27 11:31:40.343452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.995 ms 00:16:55.286 [2024-10-27 11:31:40.343470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.360812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.360894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:55.286 [2024-10-27 11:31:40.360944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.286 ms 00:16:55.286 [2024-10-27 11:31:40.360965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.378335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.378418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:55.286 [2024-10-27 11:31:40.378483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:16:55.286 [2024-10-27 11:31:40.378500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.395706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.286 [2024-10-27 11:31:40.395788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:55.286 [2024-10-27 11:31:40.395834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.113 ms 00:16:55.286 [2024-10-27 11:31:40.395855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.286 [2024-10-27 11:31:40.395908] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:55.286 [2024-10-27 11:31:40.395952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396209] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.396984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 
[2024-10-27 11:31:40.397182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:55.286 [2024-10-27 11:31:40.397444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.397995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:55.287 [2024-10-27 11:31:40.398116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:55.287 [2024-10-27 11:31:40.398527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:55.287 [2024-10-27 11:31:40.398538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:16:55.287 [2024-10-27 11:31:40.398544] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:55.287 [2024-10-27 11:31:40.398554] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:55.287 [2024-10-27 11:31:40.398560] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:55.287 [2024-10-27 11:31:40.398568] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:55.287 [2024-10-27 11:31:40.398573] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:55.287 [2024-10-27 11:31:40.398581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:55.287 [2024-10-27 11:31:40.398588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:55.287 [2024-10-27 11:31:40.398596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:55.287 [2024-10-27 11:31:40.398601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:55.287 [2024-10-27 11:31:40.398609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.287 [2024-10-27 11:31:40.398616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:55.287 [2024-10-27 11:31:40.398624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:16:55.287 [2024-10-27 11:31:40.398630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.408968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.287 [2024-10-27 11:31:40.409046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:55.287 [2024-10-27 11:31:40.409095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.305 ms 00:16:55.287 [2024-10-27 11:31:40.409117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.409467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.287 [2024-10-27 11:31:40.409530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:55.287 [2024-10-27 11:31:40.409575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:55.287 [2024-10-27 11:31:40.409593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.446091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.287 [2024-10-27 11:31:40.446184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.287 [2024-10-27 11:31:40.446234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.287 [2024-10-27 11:31:40.446259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.446372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.287 [2024-10-27 11:31:40.446450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.287 [2024-10-27 11:31:40.446512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.287 [2024-10-27 11:31:40.446532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.446599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.287 [2024-10-27 11:31:40.446648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.287 [2024-10-27 11:31:40.446675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.287 [2024-10-27 11:31:40.446691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.446756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.287 [2024-10-27 11:31:40.446779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.287 [2024-10-27 11:31:40.446797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.287 [2024-10-27 11:31:40.446813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.287 [2024-10-27 11:31:40.512629] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.287 [2024-10-27 11:31:40.512746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.287 [2024-10-27 11:31:40.512859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.288 [2024-10-27 11:31:40.512888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.546 [2024-10-27 11:31:40.564258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.546 [2024-10-27 11:31:40.564486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.546 [2024-10-27 11:31:40.564654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.546 [2024-10-27 11:31:40.564780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:55.546 [2024-10-27 11:31:40.564855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.546 [2024-10-27 11:31:40.564907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.546 [2024-10-27 11:31:40.564918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.546 [2024-10-27 11:31:40.564931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.546 [2024-10-27 11:31:40.564937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.547 [2024-10-27 11:31:40.564988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:55.547 [2024-10-27 11:31:40.565001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.547 [2024-10-27 11:31:40.565011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:55.547 [2024-10-27 11:31:40.565017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:55.547 [2024-10-27 11:31:40.565183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 283.821 ms, result 0 00:16:55.547 true 00:16:55.547 11:31:40 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73404 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73404 ']' 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73404 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73404 00:16:55.547 killing process with pid 73404 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73404' 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73404 00:16:55.547 11:31:40 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73404 00:17:02.131 11:31:46 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:02.703 65536+0 records in 00:17:02.703 65536+0 records out 00:17:02.703 268435456 bytes (268 MB, 256 MiB) copied, 1.1042 s, 243 MB/s 00:17:02.703 11:31:47 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:02.703 [2024-10-27 11:31:47.783020] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
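At this point trim.sh prepares 256 MiB of random data (4K x 65536 blocks) and writes it through the freshly created ftl0 bdev with spdk_dd, driven by the JSON bdev configuration at config/ftl.json. A rough standalone equivalent of the two traced commands is sketched below; note that shell xtrace does not print redirections, so the dd output path is inferred from the --if argument passed to spdk_dd rather than shown in the trace:

    # generate the 256 MiB random pattern consumed by spdk_dd
    dd if=/dev/urandom bs=4K count=65536 \
      > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern

    # write the pattern to the ftl0 bdev described by the saved JSON config
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json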
00:17:02.704 [2024-10-27 11:31:47.783162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73589 ] 00:17:02.704 [2024-10-27 11:31:47.946716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.964 [2024-10-27 11:31:48.051837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.224 [2024-10-27 11:31:48.342181] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.224 [2024-10-27 11:31:48.342266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:03.487 [2024-10-27 11:31:48.505291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.505566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.487 [2024-10-27 11:31:48.505592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:03.487 [2024-10-27 11:31:48.505602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.508605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.508793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.487 [2024-10-27 11:31:48.508813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:17:03.487 [2024-10-27 11:31:48.508822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.509063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.487 [2024-10-27 11:31:48.510117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.487 [2024-10-27 11:31:48.510184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.510196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.487 [2024-10-27 11:31:48.510207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:17:03.487 [2024-10-27 11:31:48.510215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.512044] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:03.487 [2024-10-27 11:31:48.526940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.526991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:03.487 [2024-10-27 11:31:48.527014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.897 ms 00:17:03.487 [2024-10-27 11:31:48.527022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.527158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.527171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:03.487 [2024-10-27 11:31:48.527181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:03.487 [2024-10-27 11:31:48.527188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.535685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:03.487 [2024-10-27 11:31:48.535736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.487 [2024-10-27 11:31:48.535748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.451 ms 00:17:03.487 [2024-10-27 11:31:48.535755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.535865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.535876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.487 [2024-10-27 11:31:48.535885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:03.487 [2024-10-27 11:31:48.535893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.535924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.535933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.487 [2024-10-27 11:31:48.535945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:03.487 [2024-10-27 11:31:48.535953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.535975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:03.487 [2024-10-27 11:31:48.540130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.540173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.487 [2024-10-27 11:31:48.540184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:17:03.487 [2024-10-27 11:31:48.540192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.540271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.540281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.487 [2024-10-27 11:31:48.540314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:03.487 [2024-10-27 11:31:48.540323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.540348] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:03.487 [2024-10-27 11:31:48.540372] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:03.487 [2024-10-27 11:31:48.540413] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:03.487 [2024-10-27 11:31:48.540430] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:03.487 [2024-10-27 11:31:48.540554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:03.487 [2024-10-27 11:31:48.540567] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.487 [2024-10-27 11:31:48.540579] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:03.487 [2024-10-27 11:31:48.540590] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.487 [2024-10-27 11:31:48.540600] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.487 [2024-10-27 11:31:48.540612] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:03.487 [2024-10-27 11:31:48.540619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.487 [2024-10-27 11:31:48.540628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:03.487 [2024-10-27 11:31:48.540635] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:03.487 [2024-10-27 11:31:48.540643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.540651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.487 [2024-10-27 11:31:48.540659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:03.487 [2024-10-27 11:31:48.540667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.540757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.487 [2024-10-27 11:31:48.540767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.487 [2024-10-27 11:31:48.540775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:03.487 [2024-10-27 11:31:48.540785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.487 [2024-10-27 11:31:48.540886] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.487 [2024-10-27 11:31:48.540897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:03.487 [2024-10-27 11:31:48.540905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.487 [2024-10-27 11:31:48.540913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.487 [2024-10-27 11:31:48.540921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.487 [2024-10-27 11:31:48.540929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.487 [2024-10-27 11:31:48.540936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:03.487 [2024-10-27 11:31:48.540943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.487 [2024-10-27 11:31:48.540950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:03.487 [2024-10-27 11:31:48.540957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.487 [2024-10-27 11:31:48.540963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.487 [2024-10-27 11:31:48.540970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:03.487 [2024-10-27 11:31:48.540977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.487 [2024-10-27 11:31:48.540994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.487 [2024-10-27 11:31:48.541001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:03.487 [2024-10-27 11:31:48.541014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.487 [2024-10-27 11:31:48.541021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.487 [2024-10-27 11:31:48.541028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:03.487 [2024-10-27 11:31:48.541035] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.487 [2024-10-27 11:31:48.541042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.487 [2024-10-27 11:31:48.541049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:03.487 [2024-10-27 11:31:48.541056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.487 [2024-10-27 11:31:48.541063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.487 [2024-10-27 11:31:48.541070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:03.487 [2024-10-27 11:31:48.541076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.488 [2024-10-27 11:31:48.541083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.488 [2024-10-27 11:31:48.541090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.488 [2024-10-27 11:31:48.541104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.488 [2024-10-27 11:31:48.541111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.488 [2024-10-27 11:31:48.541124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.488 [2024-10-27 11:31:48.541131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.488 [2024-10-27 11:31:48.541145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.488 [2024-10-27 11:31:48.541152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:03.488 [2024-10-27 11:31:48.541159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.488 [2024-10-27 11:31:48.541166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:03.488 [2024-10-27 11:31:48.541172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:03.488 [2024-10-27 11:31:48.541178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:03.488 [2024-10-27 11:31:48.541192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:03.488 [2024-10-27 11:31:48.541200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.488 [2024-10-27 11:31:48.541214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.488 [2024-10-27 11:31:48.541222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.488 [2024-10-27 11:31:48.541229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.488 [2024-10-27 11:31:48.541243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:03.488 [2024-10-27 11:31:48.541251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.488 [2024-10-27 11:31:48.541258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.488 
[2024-10-27 11:31:48.541265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.488 [2024-10-27 11:31:48.541271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.488 [2024-10-27 11:31:48.541279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.488 [2024-10-27 11:31:48.541287] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.488 [2024-10-27 11:31:48.541312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:03.488 [2024-10-27 11:31:48.541328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:03.488 [2024-10-27 11:31:48.541335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:03.488 [2024-10-27 11:31:48.541343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:03.488 [2024-10-27 11:31:48.541350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:03.488 [2024-10-27 11:31:48.541358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:03.488 [2024-10-27 11:31:48.541365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:03.488 [2024-10-27 11:31:48.541373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:03.488 [2024-10-27 11:31:48.541380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:03.488 [2024-10-27 11:31:48.541387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:03.488 [2024-10-27 11:31:48.541424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.488 [2024-10-27 11:31:48.541432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.488 [2024-10-27 11:31:48.541449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.488 [2024-10-27 11:31:48.541457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.488 [2024-10-27 11:31:48.541466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.488 [2024-10-27 11:31:48.541473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.541481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.488 [2024-10-27 11:31:48.541491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:17:03.488 [2024-10-27 11:31:48.541501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.573037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.573086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:03.488 [2024-10-27 11:31:48.573097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.476 ms 00:17:03.488 [2024-10-27 11:31:48.573105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.573236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.573247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:03.488 [2024-10-27 11:31:48.573261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:03.488 [2024-10-27 11:31:48.573269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.618707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.618745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:03.488 [2024-10-27 11:31:48.618757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.416 ms 00:17:03.488 [2024-10-27 11:31:48.618765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.618860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.618871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:03.488 [2024-10-27 11:31:48.618879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.488 [2024-10-27 11:31:48.618886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.619228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.619244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:03.488 [2024-10-27 11:31:48.619253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:03.488 [2024-10-27 11:31:48.619261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.619412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.619422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:03.488 [2024-10-27 11:31:48.619430] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:03.488 [2024-10-27 11:31:48.619437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.633079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.633110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:03.488 [2024-10-27 11:31:48.633120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.621 ms 00:17:03.488 [2024-10-27 11:31:48.633128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.646195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:03.488 [2024-10-27 11:31:48.646243] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:03.488 [2024-10-27 11:31:48.646256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.646264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:03.488 [2024-10-27 11:31:48.646273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.030 ms 00:17:03.488 [2024-10-27 11:31:48.646279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.671246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.671283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:03.488 [2024-10-27 11:31:48.671316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.878 ms 00:17:03.488 [2024-10-27 11:31:48.671324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.683392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.683427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:03.488 [2024-10-27 11:31:48.683438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.993 ms 00:17:03.488 [2024-10-27 11:31:48.683445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.695633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.695669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:03.488 [2024-10-27 11:31:48.695681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.114 ms 00:17:03.488 [2024-10-27 11:31:48.695688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.488 [2024-10-27 11:31:48.696322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.488 [2024-10-27 11:31:48.696396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:03.488 [2024-10-27 11:31:48.696410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:03.488 [2024-10-27 11:31:48.696418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.489 [2024-10-27 11:31:48.757618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.489 [2024-10-27 11:31:48.757679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:03.489 [2024-10-27 11:31:48.757694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 61.171 ms 00:17:03.489 [2024-10-27 11:31:48.757702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.769673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:03.749 [2024-10-27 11:31:48.787434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.787650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:03.749 [2024-10-27 11:31:48.787670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.619 ms 00:17:03.749 [2024-10-27 11:31:48.787679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.787779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.787791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:03.749 [2024-10-27 11:31:48.787803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:03.749 [2024-10-27 11:31:48.787811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.787867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.787876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:03.749 [2024-10-27 11:31:48.787886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:03.749 [2024-10-27 11:31:48.787894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.787916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.787929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:03.749 [2024-10-27 11:31:48.787937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:03.749 [2024-10-27 11:31:48.787948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.787986] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:03.749 [2024-10-27 11:31:48.787997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.788006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:03.749 [2024-10-27 11:31:48.788014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:03.749 [2024-10-27 11:31:48.788022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.813629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.813677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:03.749 [2024-10-27 11:31:48.813696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.584 ms 00:17:03.749 [2024-10-27 11:31:48.813705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.749 [2024-10-27 11:31:48.813822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.749 [2024-10-27 11:31:48.813834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:03.749 [2024-10-27 11:31:48.813844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:03.749 [2024-10-27 11:31:48.813853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
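Each Action/name/duration/status quadruple above is one step of the 'FTL startup' management process for ftl0, and the per-step durations add up to the overall startup time reported when the process finishes below. When reading a capture like this one, it can help to pull those durations out and rank them; the following is a minimal sketch only, assuming the console output has been saved with one log entry per line to a hypothetical file named ftl_startup.log:

    # List the slowest FTL management steps first. Field handling follows the
    # trace_step format above: a "name: <step>" entry is followed by a
    # "duration: <ms> ms" entry for the same step.
    awk '
      /trace_step/ && / name: / {
          name = ""
          for (i = 1; i <= NF; i++)
              if ($i == "name:")
                  for (j = i + 1; j <= NF; j++)
                      name = name $j (j < NF ? " " : "")
      }
      /trace_step/ && / duration: / {
          for (i = 1; i < NF; i++)
              if ($i == "duration:")
                  printf "%10.3f ms  %s\n", $(i + 1), name
      }
    ' ftl_startup.log | sort -nr | head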
00:17:03.749 [2024-10-27 11:31:48.814952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.749 [2024-10-27 11:31:48.818469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.343 ms, result 0 00:17:03.749 [2024-10-27 11:31:48.819892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:03.749 [2024-10-27 11:31:48.833584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.694  [2024-10-27T11:31:50.929Z] Copying: 15/256 [MB] (15 MBps) [2024-10-27T11:31:51.874Z] Copying: 35/256 [MB] (19 MBps) [2024-10-27T11:31:53.262Z] Copying: 49/256 [MB] (13 MBps) [2024-10-27T11:31:54.204Z] Copying: 69/256 [MB] (20 MBps) [2024-10-27T11:31:55.149Z] Copying: 116/256 [MB] (46 MBps) [2024-10-27T11:31:56.093Z] Copying: 131/256 [MB] (14 MBps) [2024-10-27T11:31:57.036Z] Copying: 143/256 [MB] (12 MBps) [2024-10-27T11:31:57.981Z] Copying: 154/256 [MB] (10 MBps) [2024-10-27T11:31:58.924Z] Copying: 164/256 [MB] (10 MBps) [2024-10-27T11:31:59.866Z] Copying: 174/256 [MB] (10 MBps) [2024-10-27T11:32:01.253Z] Copying: 191/256 [MB] (16 MBps) [2024-10-27T11:32:02.198Z] Copying: 208/256 [MB] (17 MBps) [2024-10-27T11:32:03.145Z] Copying: 225/256 [MB] (16 MBps) [2024-10-27T11:32:04.090Z] Copying: 236/256 [MB] (10 MBps) [2024-10-27T11:32:04.353Z] Copying: 253/256 [MB] (16 MBps) [2024-10-27T11:32:04.353Z] Copying: 256/256 [MB] (average 16 MBps)[2024-10-27 11:32:04.131664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.072 [2024-10-27 11:32:04.142008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.142073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:19.072 [2024-10-27 11:32:04.142090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.072 [2024-10-27 11:32:04.142099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.142123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:19.072 [2024-10-27 11:32:04.145207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.145248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:19.072 [2024-10-27 11:32:04.145269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:17:19.072 [2024-10-27 11:32:04.145277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.148165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.148217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:19.072 [2024-10-27 11:32:04.148229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:17:19.072 [2024-10-27 11:32:04.148238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.157044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.157095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:19.072 [2024-10-27 11:32:04.157107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
8.787 ms 00:17:19.072 [2024-10-27 11:32:04.157123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.164264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.164323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:19.072 [2024-10-27 11:32:04.164335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.092 ms 00:17:19.072 [2024-10-27 11:32:04.164343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.190902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.190949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:19.072 [2024-10-27 11:32:04.190961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.469 ms 00:17:19.072 [2024-10-27 11:32:04.190968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.207630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.207678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:19.072 [2024-10-27 11:32:04.207692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.594 ms 00:17:19.072 [2024-10-27 11:32:04.207709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.207868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.207879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:19.072 [2024-10-27 11:32:04.207889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:19.072 [2024-10-27 11:32:04.207896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.234145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.234189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:19.072 [2024-10-27 11:32:04.234201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.231 ms 00:17:19.072 [2024-10-27 11:32:04.234209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.260688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.260734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:19.072 [2024-10-27 11:32:04.260746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.414 ms 00:17:19.072 [2024-10-27 11:32:04.260753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.285994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.286202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:19.072 [2024-10-27 11:32:04.286223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:17:19.072 [2024-10-27 11:32:04.286231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.311882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.072 [2024-10-27 11:32:04.311929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:19.072 [2024-10-27 
11:32:04.311941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.494 ms 00:17:19.072 [2024-10-27 11:32:04.311947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.072 [2024-10-27 11:32:04.312012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:19.072 [2024-10-27 11:32:04.312028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:19.072 [2024-10-27 11:32:04.312247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312421] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312656] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 
11:32:04.312859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:19.073 [2024-10-27 11:32:04.312890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:19.073 [2024-10-27 11:32:04.312902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:17:19.073 [2024-10-27 11:32:04.312910] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:19.073 [2024-10-27 11:32:04.312919] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:19.073 [2024-10-27 11:32:04.312927] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:19.073 [2024-10-27 11:32:04.312936] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:19.073 [2024-10-27 11:32:04.312944] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:19.073 [2024-10-27 11:32:04.312952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:19.073 [2024-10-27 11:32:04.312960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:19.073 [2024-10-27 11:32:04.312967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:19.073 [2024-10-27 11:32:04.312973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:19.073 [2024-10-27 11:32:04.312980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.073 [2024-10-27 11:32:04.312988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:19.073 [2024-10-27 11:32:04.312997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:17:19.073 [2024-10-27 11:32:04.313005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.073 [2024-10-27 11:32:04.326889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.073 [2024-10-27 11:32:04.326930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:19.073 [2024-10-27 11:32:04.326942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.860 ms 00:17:19.073 [2024-10-27 11:32:04.326949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.073 [2024-10-27 11:32:04.327374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.073 [2024-10-27 11:32:04.327391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:19.073 [2024-10-27 11:32:04.327408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:19.073 [2024-10-27 11:32:04.327416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.366604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.366654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.336 [2024-10-27 11:32:04.366665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.366673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.366765] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.366775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.336 [2024-10-27 11:32:04.366786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.366794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.366854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.366864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.336 [2024-10-27 11:32:04.366872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.366879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.366896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.366905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.336 [2024-10-27 11:32:04.366913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.366924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.453334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.453386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.336 [2024-10-27 11:32:04.453414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.453423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.523957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.336 [2024-10-27 11:32:04.524024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.336 [2024-10-27 11:32:04.524115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.336 [2024-10-27 11:32:04.524174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.336 [2024-10-27 11:32:04.524334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:19.336 [2024-10-27 11:32:04.524377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:19.336 [2024-10-27 11:32:04.524396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.336 [2024-10-27 11:32:04.524471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:19.336 [2024-10-27 11:32:04.524551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.336 [2024-10-27 11:32:04.524561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:19.336 [2024-10-27 11:32:04.524569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.336 [2024-10-27 11:32:04.524730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.709 ms, result 0 00:17:20.280 00:17:20.280 00:17:20.280 11:32:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73781 00:17:20.280 11:32:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:20.280 11:32:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73781 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73781 ']' 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.280 11:32:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 [2024-10-27 11:32:05.633059] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
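The xtrace lines above show trim.sh launching spdk_tgt with FTL init-time logging and waiting for the RPC socket at /var/tmp/spdk.sock before replaying the bdev configuration that produces the second FTL startup trace below. A minimal sketch of that launch-and-configure pattern, with autotest_common.sh's waitforlisten reduced to a plain polling loop and ftl.json standing in as a hypothetical configuration file saved earlier with rpc.py save_config:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Start the target with the ftl_init log flag, as in the log above.
    "$spdk/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!

    # Simplified stand-in for waitforlisten: poll the RPC socket until the
    # target answers.
    until $rpc spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

    # Replay the saved bdev configuration; recreating the FTL bdev this way is
    # what kicks off the 'FTL startup' management process traced below.
    $rpc load_config < ftl.json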
00:17:20.544 [2024-10-27 11:32:05.633498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73781 ] 00:17:20.544 [2024-10-27 11:32:05.798662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.804 [2024-10-27 11:32:05.919841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.377 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.377 11:32:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:21.377 11:32:06 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:21.640 [2024-10-27 11:32:06.808023] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.640 [2024-10-27 11:32:06.808101] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.902 [2024-10-27 11:32:06.986528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:06.986798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.902 [2024-10-27 11:32:06.986829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:21.902 [2024-10-27 11:32:06.986840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:06.989848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:06.990034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.902 [2024-10-27 11:32:06.990060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:17:21.902 [2024-10-27 11:32:06.990069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:06.990338] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.902 [2024-10-27 11:32:06.991190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.902 [2024-10-27 11:32:06.991243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:06.991252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.902 [2024-10-27 11:32:06.991264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:17:21.902 [2024-10-27 11:32:06.991272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:06.993106] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.902 [2024-10-27 11:32:07.007788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:07.007847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.902 [2024-10-27 11:32:07.007862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:17:21.902 [2024-10-27 11:32:07.007873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:07.007989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:07.008004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.902 [2024-10-27 11:32:07.008013] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:21.902 [2024-10-27 11:32:07.008024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:07.016336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:07.016387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.902 [2024-10-27 11:32:07.016398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.260 ms 00:17:21.902 [2024-10-27 11:32:07.016407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.902 [2024-10-27 11:32:07.016560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.902 [2024-10-27 11:32:07.016574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.902 [2024-10-27 11:32:07.016583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:21.902 [2024-10-27 11:32:07.016592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.016619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.903 [2024-10-27 11:32:07.016634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.903 [2024-10-27 11:32:07.016642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.903 [2024-10-27 11:32:07.016651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.016677] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.903 [2024-10-27 11:32:07.020735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.903 [2024-10-27 11:32:07.020778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.903 [2024-10-27 11:32:07.020790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.062 ms 00:17:21.903 [2024-10-27 11:32:07.020799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.020879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.903 [2024-10-27 11:32:07.020889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.903 [2024-10-27 11:32:07.020901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:21.903 [2024-10-27 11:32:07.020909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.020931] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.903 [2024-10-27 11:32:07.020954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:21.903 [2024-10-27 11:32:07.020998] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.903 [2024-10-27 11:32:07.021014] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:21.903 [2024-10-27 11:32:07.021125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.903 [2024-10-27 11:32:07.021137] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.903 [2024-10-27 11:32:07.021151] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:21.903 [2024-10-27 11:32:07.021162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021183] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.903 [2024-10-27 11:32:07.021193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.903 [2024-10-27 11:32:07.021201] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.903 [2024-10-27 11:32:07.021213] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.903 [2024-10-27 11:32:07.021221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.903 [2024-10-27 11:32:07.021231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.903 [2024-10-27 11:32:07.021239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:21.903 [2024-10-27 11:32:07.021248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.021350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.903 [2024-10-27 11:32:07.021362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.903 [2024-10-27 11:32:07.021374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:21.903 [2024-10-27 11:32:07.021384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.903 [2024-10-27 11:32:07.021487] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.903 [2024-10-27 11:32:07.021499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.903 [2024-10-27 11:32:07.021508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.903 [2024-10-27 11:32:07.021537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.903 [2024-10-27 11:32:07.021562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.903 [2024-10-27 11:32:07.021578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.903 [2024-10-27 11:32:07.021587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.903 [2024-10-27 11:32:07.021593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.903 [2024-10-27 11:32:07.021602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.903 [2024-10-27 11:32:07.021608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:21.903 [2024-10-27 11:32:07.021619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 
[2024-10-27 11:32:07.021626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.903 [2024-10-27 11:32:07.021635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.903 [2024-10-27 11:32:07.021665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.903 [2024-10-27 11:32:07.021692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.903 [2024-10-27 11:32:07.021714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.903 [2024-10-27 11:32:07.021737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.903 [2024-10-27 11:32:07.021760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.903 [2024-10-27 11:32:07.021775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.903 [2024-10-27 11:32:07.021784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:21.903 [2024-10-27 11:32:07.021791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.903 [2024-10-27 11:32:07.021799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.903 [2024-10-27 11:32:07.021805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:21.903 [2024-10-27 11:32:07.021815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.903 [2024-10-27 11:32:07.021831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:21.903 [2024-10-27 11:32:07.021838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021846] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.903 [2024-10-27 11:32:07.021854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.903 [2024-10-27 11:32:07.021863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.903 [2024-10-27 11:32:07.021885] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:21.903 [2024-10-27 11:32:07.021893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.903 [2024-10-27 11:32:07.021902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.903 [2024-10-27 11:32:07.021909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.903 [2024-10-27 11:32:07.021918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.903 [2024-10-27 11:32:07.021925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.903 [2024-10-27 11:32:07.021935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.903 [2024-10-27 11:32:07.021946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.903 [2024-10-27 11:32:07.021960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.903 [2024-10-27 11:32:07.021968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:21.903 [2024-10-27 11:32:07.021978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.903 [2024-10-27 11:32:07.021986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:21.903 [2024-10-27 11:32:07.021996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:21.903 [2024-10-27 11:32:07.022004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:21.903 [2024-10-27 11:32:07.022013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:21.903 [2024-10-27 11:32:07.022020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:21.903 [2024-10-27 11:32:07.022029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:21.903 [2024-10-27 11:32:07.022037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:21.903 [2024-10-27 11:32:07.022047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:21.903 [2024-10-27 11:32:07.022055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:21.903 [2024-10-27 11:32:07.022065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:21.903 [2024-10-27 11:32:07.022073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:21.903 [2024-10-27 11:32:07.022082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.903 [2024-10-27 
11:32:07.022090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.904 [2024-10-27 11:32:07.022103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.904 [2024-10-27 11:32:07.022111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.904 [2024-10-27 11:32:07.022120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.904 [2024-10-27 11:32:07.022128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.904 [2024-10-27 11:32:07.022137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.022145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.904 [2024-10-27 11:32:07.022155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:17:21.904 [2024-10-27 11:32:07.022162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.055549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.055747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.904 [2024-10-27 11:32:07.055770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.322 ms 00:17:21.904 [2024-10-27 11:32:07.055779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.055919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.055932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.904 [2024-10-27 11:32:07.055944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:21.904 [2024-10-27 11:32:07.055951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.091260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.091435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.904 [2024-10-27 11:32:07.091460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.282 ms 00:17:21.904 [2024-10-27 11:32:07.091472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.091564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.091574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.904 [2024-10-27 11:32:07.091585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.904 [2024-10-27 11:32:07.091593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.092109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.092139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.904 [2024-10-27 11:32:07.092152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:17:21.904 [2024-10-27 11:32:07.092163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.092337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.092353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.904 [2024-10-27 11:32:07.092364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:21.904 [2024-10-27 11:32:07.092373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.110239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.110288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.904 [2024-10-27 11:32:07.110337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.839 ms 00:17:21.904 [2024-10-27 11:32:07.110345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.124680] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:21.904 [2024-10-27 11:32:07.124730] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.904 [2024-10-27 11:32:07.124745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.124753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.904 [2024-10-27 11:32:07.124766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.283 ms 00:17:21.904 [2024-10-27 11:32:07.124773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.150868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.150917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.904 [2024-10-27 11:32:07.150933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.000 ms 00:17:21.904 [2024-10-27 11:32:07.150942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.164224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.164269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.904 [2024-10-27 11:32:07.164287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.183 ms 00:17:21.904 [2024-10-27 11:32:07.164308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.176910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.176952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.904 [2024-10-27 11:32:07.176967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.508 ms 00:17:21.904 [2024-10-27 11:32:07.176975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.904 [2024-10-27 11:32:07.177661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.904 [2024-10-27 11:32:07.177684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.904 [2024-10-27 11:32:07.177697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:21.904 [2024-10-27 11:32:07.177712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.165 [2024-10-27 
11:32:07.261266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.165 [2024-10-27 11:32:07.261351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:22.165 [2024-10-27 11:32:07.261374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.522 ms 00:17:22.166 [2024-10-27 11:32:07.261384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.272673] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:22.166 [2024-10-27 11:32:07.291837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.292073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:22.166 [2024-10-27 11:32:07.292096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.342 ms 00:17:22.166 [2024-10-27 11:32:07.292108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.292211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.292224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:22.166 [2024-10-27 11:32:07.292234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:22.166 [2024-10-27 11:32:07.292244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.292334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.292347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:22.166 [2024-10-27 11:32:07.292356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:22.166 [2024-10-27 11:32:07.292366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.292397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.292408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:22.166 [2024-10-27 11:32:07.292417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:22.166 [2024-10-27 11:32:07.292430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.292468] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:22.166 [2024-10-27 11:32:07.292484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.292492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:22.166 [2024-10-27 11:32:07.292504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:22.166 [2024-10-27 11:32:07.292515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.318083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.318133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:22.166 [2024-10-27 11:32:07.318150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.514 ms 00:17:22.166 [2024-10-27 11:32:07.318159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.318322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.166 [2024-10-27 11:32:07.318335] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:22.166 [2024-10-27 11:32:07.318348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:22.166 [2024-10-27 11:32:07.318356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.166 [2024-10-27 11:32:07.319507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.166 [2024-10-27 11:32:07.322918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.610 ms, result 0 00:17:22.166 [2024-10-27 11:32:07.325219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.166 Some configs were skipped because the RPC state that can call them passed over. 00:17:22.166 11:32:07 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:22.427 [2024-10-27 11:32:07.566236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.427 [2024-10-27 11:32:07.566461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.427 [2024-10-27 11:32:07.566543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:17:22.427 [2024-10-27 11:32:07.566571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.427 [2024-10-27 11:32:07.566632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.822 ms, result 0 00:17:22.427 true 00:17:22.427 11:32:07 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:22.689 [2024-10-27 11:32:07.781902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.689 [2024-10-27 11:32:07.782098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:22.689 [2024-10-27 11:32:07.782166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:17:22.689 [2024-10-27 11:32:07.782191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.689 [2024-10-27 11:32:07.782253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.146 ms, result 0 00:17:22.689 true 00:17:22.689 11:32:07 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73781 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73781 ']' 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73781 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73781 00:17:22.689 killing process with pid 73781 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73781' 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73781 00:17:22.689 11:32:07 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73781 00:17:23.263 [2024-10-27 11:32:08.384579] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.384620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:23.263 [2024-10-27 11:32:08.384630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:23.263 [2024-10-27 11:32:08.384638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.384655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:23.263 [2024-10-27 11:32:08.386819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.386842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:23.263 [2024-10-27 11:32:08.386855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:17:23.263 [2024-10-27 11:32:08.386861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.387094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.387102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:23.263 [2024-10-27 11:32:08.387110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:23.263 [2024-10-27 11:32:08.387116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.390262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.390287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:23.263 [2024-10-27 11:32:08.390303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:17:23.263 [2024-10-27 11:32:08.390311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.395565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.395685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:23.263 [2024-10-27 11:32:08.395702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:17:23.263 [2024-10-27 11:32:08.395708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.403035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.403134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:23.263 [2024-10-27 11:32:08.403149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.269 ms 00:17:23.263 [2024-10-27 11:32:08.403160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.409733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.409835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:23.263 [2024-10-27 11:32:08.409850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.543 ms 00:17:23.263 [2024-10-27 11:32:08.409858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.409965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.263 [2024-10-27 11:32:08.409973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:23.263 [2024-10-27 11:32:08.409980] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:23.263 [2024-10-27 11:32:08.409986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.263 [2024-10-27 11:32:08.417834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.417859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:23.264 [2024-10-27 11:32:08.417868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.832 ms 00:17:23.264 [2024-10-27 11:32:08.417873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.425472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.425496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:23.264 [2024-10-27 11:32:08.425506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.567 ms 00:17:23.264 [2024-10-27 11:32:08.425512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.432495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.432683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:23.264 [2024-10-27 11:32:08.432698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.952 ms 00:17:23.264 [2024-10-27 11:32:08.432703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.439794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.439890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:23.264 [2024-10-27 11:32:08.439904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.005 ms 00:17:23.264 [2024-10-27 11:32:08.439909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.439943] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:23.264 [2024-10-27 11:32:08.439955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.439996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440021] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 
[2024-10-27 11:32:08.440180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:23.264 [2024-10-27 11:32:08.440344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:23.264 [2024-10-27 11:32:08.440616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:23.264 [2024-10-27 11:32:08.440624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:17:23.264 [2024-10-27 11:32:08.440635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:23.264 [2024-10-27 11:32:08.440643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:23.264 [2024-10-27 11:32:08.440651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:23.264 [2024-10-27 11:32:08.440658] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:23.264 [2024-10-27 11:32:08.440663] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:23.264 [2024-10-27 11:32:08.440670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:23.264 [2024-10-27 11:32:08.440675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:23.264 [2024-10-27 11:32:08.440681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:23.264 [2024-10-27 11:32:08.440686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:23.264 [2024-10-27 11:32:08.440692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:23.264 [2024-10-27 11:32:08.440698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:23.264 [2024-10-27 11:32:08.440706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:17:23.264 [2024-10-27 11:32:08.440711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.450273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.450305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:23.264 [2024-10-27 11:32:08.450316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.544 ms 00:17:23.264 [2024-10-27 11:32:08.450322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.450605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.264 [2024-10-27 11:32:08.450612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:23.264 [2024-10-27 11:32:08.450620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:23.264 [2024-10-27 11:32:08.450625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.485220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.264 [2024-10-27 11:32:08.485246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.264 [2024-10-27 11:32:08.485255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.264 [2024-10-27 11:32:08.485261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.485352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.264 [2024-10-27 11:32:08.485361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.264 [2024-10-27 11:32:08.485369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.264 [2024-10-27 11:32:08.485374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.485411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.264 [2024-10-27 11:32:08.485418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.264 [2024-10-27 11:32:08.485427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.264 [2024-10-27 11:32:08.485432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.264 [2024-10-27 11:32:08.485447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.264 [2024-10-27 11:32:08.485453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.264 [2024-10-27 11:32:08.485460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.264 [2024-10-27 11:32:08.485465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.544319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.544351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.526 [2024-10-27 11:32:08.544360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.544367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 
11:32:08.592196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.592225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.526 [2024-10-27 11:32:08.592235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.592241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.526 [2024-10-27 11:32:08.593233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.526 [2024-10-27 11:32:08.593278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.526 [2024-10-27 11:32:08.593398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.526 [2024-10-27 11:32:08.593443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.526 [2024-10-27 11:32:08.593494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.526 [2024-10-27 11:32:08.593541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.526 [2024-10-27 11:32:08.593548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.526 [2024-10-27 11:32:08.593554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.526 [2024-10-27 11:32:08.593655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.057 ms, result 0 00:17:24.099 11:32:09 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:24.099 11:32:09 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.099 [2024-10-27 11:32:09.163335] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:17:24.099 [2024-10-27 11:32:09.163650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73833 ] 00:17:24.099 [2024-10-27 11:32:09.318796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.360 [2024-10-27 11:32:09.400331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.360 [2024-10-27 11:32:09.605238] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:24.360 [2024-10-27 11:32:09.605290] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:24.622 [2024-10-27 11:32:09.753083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.753121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:24.622 [2024-10-27 11:32:09.753131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:24.622 [2024-10-27 11:32:09.753137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.755175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.755336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.622 [2024-10-27 11:32:09.755349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:17:24.622 [2024-10-27 11:32:09.755355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.755415] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:24.622 [2024-10-27 11:32:09.755932] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:24.622 [2024-10-27 11:32:09.755949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.755955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.622 [2024-10-27 11:32:09.755962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:17:24.622 [2024-10-27 11:32:09.755967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.756957] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:24.622 [2024-10-27 11:32:09.766443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.766467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:24.622 [2024-10-27 11:32:09.766478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.487 ms 00:17:24.622 [2024-10-27 11:32:09.766485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.766547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.766556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:24.622 [2024-10-27 11:32:09.766562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:17:24.622 [2024-10-27 11:32:09.766568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.770909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.770932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.622 [2024-10-27 11:32:09.770940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:17:24.622 [2024-10-27 11:32:09.770946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.771019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.771026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.622 [2024-10-27 11:32:09.771032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:24.622 [2024-10-27 11:32:09.771038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.771053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.771059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:24.622 [2024-10-27 11:32:09.771067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:24.622 [2024-10-27 11:32:09.771073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.771090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:24.622 [2024-10-27 11:32:09.773823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.773843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.622 [2024-10-27 11:32:09.773850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.736 ms 00:17:24.622 [2024-10-27 11:32:09.773855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.622 [2024-10-27 11:32:09.773881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.622 [2024-10-27 11:32:09.773888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:24.623 [2024-10-27 11:32:09.773894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:24.623 [2024-10-27 11:32:09.773899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.623 [2024-10-27 11:32:09.773912] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:24.623 [2024-10-27 11:32:09.773925] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:24.623 [2024-10-27 11:32:09.773953] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:24.623 [2024-10-27 11:32:09.773964] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:24.623 [2024-10-27 11:32:09.774042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:24.623 [2024-10-27 11:32:09.774050] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:24.623 [2024-10-27 11:32:09.774058] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:24.623 [2024-10-27 11:32:09.774065] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:24.623 [2024-10-27 11:32:09.774086] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:24.623 [2024-10-27 11:32:09.774091] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:24.623 [2024-10-27 11:32:09.774097] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:24.623 [2024-10-27 11:32:09.774103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.623 [2024-10-27 11:32:09.774108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:24.623 [2024-10-27 11:32:09.774114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:24.623 [2024-10-27 11:32:09.774119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.623 [2024-10-27 11:32:09.774185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.623 [2024-10-27 11:32:09.774191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:24.623 [2024-10-27 11:32:09.774197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:24.623 [2024-10-27 11:32:09.774204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.623 [2024-10-27 11:32:09.774276] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:24.623 [2024-10-27 11:32:09.774283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:24.623 [2024-10-27 11:32:09.774289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:24.623 [2024-10-27 11:32:09.774315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:24.623 [2024-10-27 11:32:09.774331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.623 [2024-10-27 11:32:09.774341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:24.623 [2024-10-27 11:32:09.774346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:24.623 [2024-10-27 11:32:09.774351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.623 [2024-10-27 11:32:09.774360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:24.623 [2024-10-27 11:32:09.774368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:24.623 [2024-10-27 11:32:09.774373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774378] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:24.623 [2024-10-27 11:32:09.774382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:24.623 [2024-10-27 11:32:09.774397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:24.623 [2024-10-27 11:32:09.774412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:24.623 [2024-10-27 11:32:09.774427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:24.623 [2024-10-27 11:32:09.774441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:24.623 [2024-10-27 11:32:09.774456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.623 [2024-10-27 11:32:09.774466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:24.623 [2024-10-27 11:32:09.774471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:24.623 [2024-10-27 11:32:09.774475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.623 [2024-10-27 11:32:09.774480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:24.623 [2024-10-27 11:32:09.774485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:24.623 [2024-10-27 11:32:09.774489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:24.623 [2024-10-27 11:32:09.774499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:24.623 [2024-10-27 11:32:09.774504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774509] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:24.623 [2024-10-27 11:32:09.774515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:24.623 [2024-10-27 11:32:09.774523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.623 [2024-10-27 11:32:09.774537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:24.623 
[2024-10-27 11:32:09.774542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:24.623 [2024-10-27 11:32:09.774547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:24.623 [2024-10-27 11:32:09.774552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:24.623 [2024-10-27 11:32:09.774557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:24.623 [2024-10-27 11:32:09.774562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:24.623 [2024-10-27 11:32:09.774568] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:24.623 [2024-10-27 11:32:09.774574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:24.623 [2024-10-27 11:32:09.774586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:24.623 [2024-10-27 11:32:09.774591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:24.623 [2024-10-27 11:32:09.774596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:24.623 [2024-10-27 11:32:09.774602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:24.623 [2024-10-27 11:32:09.774607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:24.623 [2024-10-27 11:32:09.774612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:24.623 [2024-10-27 11:32:09.774617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:24.623 [2024-10-27 11:32:09.774623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:24.623 [2024-10-27 11:32:09.774628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:24.623 [2024-10-27 11:32:09.774654] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:24.623 [2024-10-27 11:32:09.774660] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:24.623 [2024-10-27 11:32:09.774672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:24.623 [2024-10-27 11:32:09.774678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:24.623 [2024-10-27 11:32:09.774683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:24.623 [2024-10-27 11:32:09.774689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.623 [2024-10-27 11:32:09.774694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:24.623 [2024-10-27 11:32:09.774699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:17:24.623 [2024-10-27 11:32:09.774707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.623 [2024-10-27 11:32:09.795444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.623 [2024-10-27 11:32:09.795467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.624 [2024-10-27 11:32:09.795476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.700 ms 00:17:24.624 [2024-10-27 11:32:09.795482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.795575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.795582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:24.624 [2024-10-27 11:32:09.795591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:24.624 [2024-10-27 11:32:09.795597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.839888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.839918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.624 [2024-10-27 11:32:09.839927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.275 ms 00:17:24.624 [2024-10-27 11:32:09.839934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.839993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.840002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.624 [2024-10-27 11:32:09.840009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:24.624 [2024-10-27 11:32:09.840015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.840327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.840344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.624 [2024-10-27 11:32:09.840351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:17:24.624 [2024-10-27 11:32:09.840357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 
11:32:09.840462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.840476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.624 [2024-10-27 11:32:09.840482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:24.624 [2024-10-27 11:32:09.840488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.851373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.851393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.624 [2024-10-27 11:32:09.851401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.869 ms 00:17:24.624 [2024-10-27 11:32:09.851407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.860952] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:24.624 [2024-10-27 11:32:09.860979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:24.624 [2024-10-27 11:32:09.860987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.860994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:24.624 [2024-10-27 11:32:09.861001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.491 ms 00:17:24.624 [2024-10-27 11:32:09.861006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.879424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.879455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:24.624 [2024-10-27 11:32:09.879464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.371 ms 00:17:24.624 [2024-10-27 11:32:09.879470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.888286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.888311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:24.624 [2024-10-27 11:32:09.888318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.763 ms 00:17:24.624 [2024-10-27 11:32:09.888323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.896828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.896848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:24.624 [2024-10-27 11:32:09.896855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.464 ms 00:17:24.624 [2024-10-27 11:32:09.896860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.624 [2024-10-27 11:32:09.897328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.624 [2024-10-27 11:32:09.897343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:24.624 [2024-10-27 11:32:09.897350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:17:24.624 [2024-10-27 11:32:09.897355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.940958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.940993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:24.886 [2024-10-27 11:32:09.941002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.585 ms 00:17:24.886 [2024-10-27 11:32:09.941009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.949031] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:24.886 [2024-10-27 11:32:09.960588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.960612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:24.886 [2024-10-27 11:32:09.960621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.512 ms 00:17:24.886 [2024-10-27 11:32:09.960628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.960699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.960709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:24.886 [2024-10-27 11:32:09.960715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:24.886 [2024-10-27 11:32:09.960721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.960756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.960762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:24.886 [2024-10-27 11:32:09.960769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:24.886 [2024-10-27 11:32:09.960775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.960796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.960802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:24.886 [2024-10-27 11:32:09.960810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:24.886 [2024-10-27 11:32:09.960815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.960839] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:24.886 [2024-10-27 11:32:09.960847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.960853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:24.886 [2024-10-27 11:32:09.960859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:24.886 [2024-10-27 11:32:09.960865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.978474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.978501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:24.886 [2024-10-27 11:32:09.978509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.594 ms 00:17:24.886 [2024-10-27 11:32:09.978515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.978584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.886 [2024-10-27 11:32:09.978592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:24.886 [2024-10-27 11:32:09.978598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:24.886 [2024-10-27 11:32:09.978604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.886 [2024-10-27 11:32:09.979559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:24.886 [2024-10-27 11:32:09.981959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.230 ms, result 0 00:17:24.886 [2024-10-27 11:32:09.982568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:24.886 [2024-10-27 11:32:09.997465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.831  [2024-10-27T11:32:12.055Z] Copying: 25/256 [MB] (25 MBps) [2024-10-27T11:32:13.442Z] Copying: 50/256 [MB] (25 MBps) [2024-10-27T11:32:14.015Z] Copying: 69/256 [MB] (19 MBps) [2024-10-27T11:32:15.403Z] Copying: 86/256 [MB] (16 MBps) [2024-10-27T11:32:16.347Z] Copying: 103/256 [MB] (16 MBps) [2024-10-27T11:32:17.291Z] Copying: 117/256 [MB] (14 MBps) [2024-10-27T11:32:18.239Z] Copying: 139/256 [MB] (22 MBps) [2024-10-27T11:32:19.185Z] Copying: 151/256 [MB] (12 MBps) [2024-10-27T11:32:20.129Z] Copying: 162/256 [MB] (10 MBps) [2024-10-27T11:32:21.073Z] Copying: 175/256 [MB] (12 MBps) [2024-10-27T11:32:22.057Z] Copying: 198/256 [MB] (22 MBps) [2024-10-27T11:32:23.056Z] Copying: 212/256 [MB] (13 MBps) [2024-10-27T11:32:24.443Z] Copying: 223/256 [MB] (11 MBps) [2024-10-27T11:32:25.016Z] Copying: 234/256 [MB] (10 MBps) [2024-10-27T11:32:26.404Z] Copying: 244/256 [MB] (10 MBps) [2024-10-27T11:32:26.404Z] Copying: 254/256 [MB] (10 MBps) [2024-10-27T11:32:26.404Z] Copying: 256/256 [MB] (average 15 MBps)[2024-10-27 11:32:26.102318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.123 [2024-10-27 11:32:26.112736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.112800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.123 [2024-10-27 11:32:26.112821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.123 [2024-10-27 11:32:26.112835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.112867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.123 [2024-10-27 11:32:26.115904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.115966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.123 [2024-10-27 11:32:26.115983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:17:41.123 [2024-10-27 11:32:26.115996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.116317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.116342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.123 [2024-10-27 11:32:26.116357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:17:41.123 [2024-10-27 11:32:26.116370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 
11:32:26.120088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.120128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.123 [2024-10-27 11:32:26.120155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:17:41.123 [2024-10-27 11:32:26.120171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.127512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.127561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.123 [2024-10-27 11:32:26.127578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:17:41.123 [2024-10-27 11:32:26.127594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.153763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.153818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.123 [2024-10-27 11:32:26.153835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.079 ms 00:17:41.123 [2024-10-27 11:32:26.153846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.169745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.169800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.123 [2024-10-27 11:32:26.169825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.812 ms 00:17:41.123 [2024-10-27 11:32:26.169837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.170083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.170113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.123 [2024-10-27 11:32:26.170127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:41.123 [2024-10-27 11:32:26.170141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.195796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.195850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.123 [2024-10-27 11:32:26.195866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.611 ms 00:17:41.123 [2024-10-27 11:32:26.195877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.221421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.221476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.123 [2024-10-27 11:32:26.221492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.466 ms 00:17:41.123 [2024-10-27 11:32:26.221503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.245640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.245690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.123 [2024-10-27 11:32:26.245706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:17:41.123 [2024-10-27 11:32:26.245716] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-10-27 11:32:26.270655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-10-27 11:32:26.270706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.123 [2024-10-27 11:32:26.270721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.840 ms 00:17:41.123 [2024-10-27 11:32:26.270731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.124 [2024-10-27 11:32:26.270812] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.124 [2024-10-27 11:32:26.270843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.270994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271079] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 
11:32:26.271747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.271998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-10-27 11:32:26.272012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:17:41.125 [2024-10-27 11:32:26.272067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-10-27 11:32:26.272145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.125 [2024-10-27 11:32:26.272159] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:17:41.125 [2024-10-27 11:32:26.272172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.125 [2024-10-27 11:32:26.272185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.125 [2024-10-27 11:32:26.272198] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.125 [2024-10-27 11:32:26.272216] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.125 [2024-10-27 11:32:26.272228] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.125 [2024-10-27 11:32:26.272241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.125 [2024-10-27 11:32:26.272254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.125 [2024-10-27 11:32:26.272264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.125 [2024-10-27 11:32:26.272276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.125 [2024-10-27 11:32:26.272290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-10-27 11:32:26.272316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.125 [2024-10-27 11:32:26.272330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:17:41.125 [2024-10-27 11:32:26.272348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.286072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-10-27 11:32:26.286117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.125 [2024-10-27 11:32:26.286133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.669 ms 00:17:41.125 [2024-10-27 11:32:26.286145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.286599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-10-27 11:32:26.286641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.125 [2024-10-27 11:32:26.286655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:41.125 [2024-10-27 11:32:26.286667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.325605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-10-27 11:32:26.325662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:41.125 [2024-10-27 11:32:26.325677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-10-27 11:32:26.325689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.325798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-10-27 11:32:26.325819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.125 [2024-10-27 11:32:26.325832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-10-27 11:32:26.325844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.325918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-10-27 11:32:26.325939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.125 [2024-10-27 11:32:26.325951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-10-27 11:32:26.325962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-10-27 11:32:26.325991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-10-27 11:32:26.326008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.125 [2024-10-27 11:32:26.326025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-10-27 11:32:26.326039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.411203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.411268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.387 [2024-10-27 11:32:26.411287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.411323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.480975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.387 [2024-10-27 11:32:26.481067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.387 [2024-10-27 11:32:26.481184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.387 [2024-10-27 11:32:26.481265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481467] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.387 [2024-10-27 11:32:26.481481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.387 [2024-10-27 11:32:26.481580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.387 [2024-10-27 11:32:26.481696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.387 [2024-10-27 11:32:26.481781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.387 [2024-10-27 11:32:26.481789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.387 [2024-10-27 11:32:26.481799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.387 [2024-10-27 11:32:26.481965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.214 ms, result 0 00:17:41.958 00:17:41.958 00:17:41.958 11:32:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:41.959 11:32:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:42.531 11:32:27 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:42.793 [2024-10-27 11:32:27.870520] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:17:42.794 [2024-10-27 11:32:27.870669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74031 ] 00:17:42.794 [2024-10-27 11:32:28.033752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.056 [2024-10-27 11:32:28.150693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.318 [2024-10-27 11:32:28.442874] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.318 [2024-10-27 11:32:28.442952] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:43.581 [2024-10-27 11:32:28.604707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.604772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:43.581 [2024-10-27 11:32:28.604789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:43.581 [2024-10-27 11:32:28.604797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.608022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.608080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.581 [2024-10-27 11:32:28.608093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:17:43.581 [2024-10-27 11:32:28.608101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.608246] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:43.581 [2024-10-27 11:32:28.609073] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:43.581 [2024-10-27 11:32:28.609101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.609110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.581 [2024-10-27 11:32:28.609120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:17:43.581 [2024-10-27 11:32:28.609127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.611000] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:43.581 [2024-10-27 11:32:28.625161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.625209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:43.581 [2024-10-27 11:32:28.625229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.163 ms 00:17:43.581 [2024-10-27 11:32:28.625237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.625364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.625378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:43.581 [2024-10-27 11:32:28.625388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:43.581 [2024-10-27 11:32:28.625396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.633252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:43.581 [2024-10-27 11:32:28.633316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.581 [2024-10-27 11:32:28.633327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.810 ms 00:17:43.581 [2024-10-27 11:32:28.633335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.581 [2024-10-27 11:32:28.633441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.581 [2024-10-27 11:32:28.633452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.581 [2024-10-27 11:32:28.633461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:43.582 [2024-10-27 11:32:28.633470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.633497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.582 [2024-10-27 11:32:28.633507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:43.582 [2024-10-27 11:32:28.633518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:43.582 [2024-10-27 11:32:28.633526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.633548] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:43.582 [2024-10-27 11:32:28.637642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.582 [2024-10-27 11:32:28.637684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.582 [2024-10-27 11:32:28.637695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:17:43.582 [2024-10-27 11:32:28.637703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.637776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.582 [2024-10-27 11:32:28.637786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:43.582 [2024-10-27 11:32:28.637796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:43.582 [2024-10-27 11:32:28.637804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.637823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:43.582 [2024-10-27 11:32:28.637843] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:43.582 [2024-10-27 11:32:28.637884] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:43.582 [2024-10-27 11:32:28.637901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:43.582 [2024-10-27 11:32:28.638007] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:43.582 [2024-10-27 11:32:28.638018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:43.582 [2024-10-27 11:32:28.638030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:43.582 [2024-10-27 11:32:28.638041] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638050] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638062] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:43.582 [2024-10-27 11:32:28.638070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:43.582 [2024-10-27 11:32:28.638078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:43.582 [2024-10-27 11:32:28.638085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:43.582 [2024-10-27 11:32:28.638094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.582 [2024-10-27 11:32:28.638101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:43.582 [2024-10-27 11:32:28.638110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:43.582 [2024-10-27 11:32:28.638118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.638207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.582 [2024-10-27 11:32:28.638216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:43.582 [2024-10-27 11:32:28.638224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:43.582 [2024-10-27 11:32:28.638234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.582 [2024-10-27 11:32:28.638354] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:43.582 [2024-10-27 11:32:28.638365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:43.582 [2024-10-27 11:32:28.638374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:43.582 [2024-10-27 11:32:28.638398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:43.582 [2024-10-27 11:32:28.638420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.582 [2024-10-27 11:32:28.638433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:43.582 [2024-10-27 11:32:28.638440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:43.582 [2024-10-27 11:32:28.638447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:43.582 [2024-10-27 11:32:28.638463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:43.582 [2024-10-27 11:32:28.638470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:43.582 [2024-10-27 11:32:28.638479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:43.582 [2024-10-27 11:32:28.638493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638500] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:43.582 [2024-10-27 11:32:28.638514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:43.582 [2024-10-27 11:32:28.638534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:43.582 [2024-10-27 11:32:28.638554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:43.582 [2024-10-27 11:32:28.638573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:43.582 [2024-10-27 11:32:28.638593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.582 [2024-10-27 11:32:28.638606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:43.582 [2024-10-27 11:32:28.638612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:43.582 [2024-10-27 11:32:28.638619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:43.582 [2024-10-27 11:32:28.638625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:43.582 [2024-10-27 11:32:28.638634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:43.582 [2024-10-27 11:32:28.638641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:43.582 [2024-10-27 11:32:28.638654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:43.582 [2024-10-27 11:32:28.638660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638667] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:43.582 [2024-10-27 11:32:28.638675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:43.582 [2024-10-27 11:32:28.638684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:43.582 [2024-10-27 11:32:28.638703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:43.582 [2024-10-27 11:32:28.638710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:43.582 [2024-10-27 11:32:28.638717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:43.582 
[2024-10-27 11:32:28.638724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:43.582 [2024-10-27 11:32:28.638730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:43.582 [2024-10-27 11:32:28.638737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:43.582 [2024-10-27 11:32:28.638746] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:43.582 [2024-10-27 11:32:28.638756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.582 [2024-10-27 11:32:28.638764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:43.582 [2024-10-27 11:32:28.638771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:43.582 [2024-10-27 11:32:28.638778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:43.582 [2024-10-27 11:32:28.638785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:43.582 [2024-10-27 11:32:28.638792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:43.582 [2024-10-27 11:32:28.638800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:43.582 [2024-10-27 11:32:28.638806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:43.582 [2024-10-27 11:32:28.638813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:43.582 [2024-10-27 11:32:28.638820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:43.582 [2024-10-27 11:32:28.638827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:43.582 [2024-10-27 11:32:28.638834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:43.582 [2024-10-27 11:32:28.638841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:43.582 [2024-10-27 11:32:28.638848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:43.582 [2024-10-27 11:32:28.638856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:43.582 [2024-10-27 11:32:28.638863] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:43.583 [2024-10-27 11:32:28.638871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:43.583 [2024-10-27 11:32:28.638879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:43.583 [2024-10-27 11:32:28.638886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:43.583 [2024-10-27 11:32:28.638893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:43.583 [2024-10-27 11:32:28.638900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:43.583 [2024-10-27 11:32:28.638908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.638916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:43.583 [2024-10-27 11:32:28.638926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:17:43.583 [2024-10-27 11:32:28.638937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.670396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.670441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.583 [2024-10-27 11:32:28.670453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.406 ms 00:17:43.583 [2024-10-27 11:32:28.670460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.670590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.670601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:43.583 [2024-10-27 11:32:28.670614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:43.583 [2024-10-27 11:32:28.670622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.715607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.715664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.583 [2024-10-27 11:32:28.715678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.962 ms 00:17:43.583 [2024-10-27 11:32:28.715688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.715802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.715815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.583 [2024-10-27 11:32:28.715825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:43.583 [2024-10-27 11:32:28.715833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.716399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.716430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.583 [2024-10-27 11:32:28.716441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:17:43.583 [2024-10-27 11:32:28.716449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.716619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.716629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.583 [2024-10-27 11:32:28.716638] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:43.583 [2024-10-27 11:32:28.716646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.732690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.732734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.583 [2024-10-27 11:32:28.732746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.022 ms 00:17:43.583 [2024-10-27 11:32:28.732754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.746785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:43.583 [2024-10-27 11:32:28.746837] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:43.583 [2024-10-27 11:32:28.746850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.746858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:43.583 [2024-10-27 11:32:28.746868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.980 ms 00:17:43.583 [2024-10-27 11:32:28.746876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.772387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.772449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:43.583 [2024-10-27 11:32:28.772462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.418 ms 00:17:43.583 [2024-10-27 11:32:28.772471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.785310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.785360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:43.583 [2024-10-27 11:32:28.785371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.726 ms 00:17:43.583 [2024-10-27 11:32:28.785378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.797623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.797672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:43.583 [2024-10-27 11:32:28.797683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.161 ms 00:17:43.583 [2024-10-27 11:32:28.797690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.583 [2024-10-27 11:32:28.798364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.583 [2024-10-27 11:32:28.798389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:43.583 [2024-10-27 11:32:28.798400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:17:43.583 [2024-10-27 11:32:28.798407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.864743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.864812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:43.845 [2024-10-27 11:32:28.864829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.307 ms 00:17:43.845 [2024-10-27 11:32:28.864839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.875864] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:43.845 [2024-10-27 11:32:28.894971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.895025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:43.845 [2024-10-27 11:32:28.895039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.015 ms 00:17:43.845 [2024-10-27 11:32:28.895048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.895166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.895178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:43.845 [2024-10-27 11:32:28.895188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:43.845 [2024-10-27 11:32:28.895196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.895254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.895264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:43.845 [2024-10-27 11:32:28.895272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:43.845 [2024-10-27 11:32:28.895281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.895338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.895351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:43.845 [2024-10-27 11:32:28.895360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:43.845 [2024-10-27 11:32:28.895368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.895401] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:43.845 [2024-10-27 11:32:28.895412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.895420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:43.845 [2024-10-27 11:32:28.895429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:43.845 [2024-10-27 11:32:28.895437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.921304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.921357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:43.845 [2024-10-27 11:32:28.921370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.842 ms 00:17:43.845 [2024-10-27 11:32:28.921378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.845 [2024-10-27 11:32:28.921514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.845 [2024-10-27 11:32:28.921526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:43.845 [2024-10-27 11:32:28.921535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:43.845 [2024-10-27 11:32:28.921544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:43.845 [2024-10-27 11:32:28.922606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:43.845 [2024-10-27 11:32:28.926135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.552 ms, result 0 00:17:43.845 [2024-10-27 11:32:28.927527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:43.845 [2024-10-27 11:32:28.941218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.106  [2024-10-27T11:32:29.387Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-10-27 11:32:29.332937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.106 [2024-10-27 11:32:29.341838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.106 [2024-10-27 11:32:29.341885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.106 [2024-10-27 11:32:29.341899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.106 [2024-10-27 11:32:29.341907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.106 [2024-10-27 11:32:29.341937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:44.106 [2024-10-27 11:32:29.344892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.107 [2024-10-27 11:32:29.344930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.107 [2024-10-27 11:32:29.344941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.941 ms 00:17:44.107 [2024-10-27 11:32:29.344949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.107 [2024-10-27 11:32:29.348028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.107 [2024-10-27 11:32:29.348073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.107 [2024-10-27 11:32:29.348084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:17:44.107 [2024-10-27 11:32:29.348092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.107 [2024-10-27 11:32:29.352794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.107 [2024-10-27 11:32:29.352834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.107 [2024-10-27 11:32:29.352852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.685 ms 00:17:44.107 [2024-10-27 11:32:29.352860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.107 [2024-10-27 11:32:29.359802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.107 [2024-10-27 11:32:29.359839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.107 [2024-10-27 11:32:29.359850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:17:44.107 [2024-10-27 11:32:29.359858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.385082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.385132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.370 [2024-10-27 11:32:29.385143] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:17:44.370 [2024-10-27 11:32:29.385150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.402199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.402259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.370 [2024-10-27 11:32:29.402274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.000 ms 00:17:44.370 [2024-10-27 11:32:29.402287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.402452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.402463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.370 [2024-10-27 11:32:29.402472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:44.370 [2024-10-27 11:32:29.402480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.428986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.429041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:44.370 [2024-10-27 11:32:29.429052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.480 ms 00:17:44.370 [2024-10-27 11:32:29.429059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.454761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.454806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:44.370 [2024-10-27 11:32:29.454817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.413 ms 00:17:44.370 [2024-10-27 11:32:29.454823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.479522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.479568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.370 [2024-10-27 11:32:29.479580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.640 ms 00:17:44.370 [2024-10-27 11:32:29.479587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.504617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.370 [2024-10-27 11:32:29.504665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.370 [2024-10-27 11:32:29.504676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.953 ms 00:17:44.370 [2024-10-27 11:32:29.504684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.370 [2024-10-27 11:32:29.504729] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.370 [2024-10-27 11:32:29.504745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:44.370 [2024-10-27 11:32:29.504780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.370 [2024-10-27 11:32:29.504915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.504998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505356] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.371 [2024-10-27 11:32:29.505477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.372 [2024-10-27 11:32:29.505541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.372 [2024-10-27 11:32:29.505550] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:17:44.372 [2024-10-27 11:32:29.505558] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.372 [2024-10-27 11:32:29.505567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:44.372 [2024-10-27 11:32:29.505575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.372 [2024-10-27 11:32:29.505583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.372 [2024-10-27 11:32:29.505590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.372 [2024-10-27 11:32:29.505599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.372 [2024-10-27 11:32:29.505606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.372 [2024-10-27 11:32:29.505613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.372 [2024-10-27 11:32:29.505620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.372 [2024-10-27 11:32:29.505628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.372 [2024-10-27 11:32:29.505639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.372 [2024-10-27 11:32:29.505647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:17:44.372 [2024-10-27 11:32:29.505655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.518745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.372 [2024-10-27 11:32:29.518785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.372 [2024-10-27 11:32:29.518795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:17:44.372 [2024-10-27 11:32:29.518803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.519212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.372 [2024-10-27 11:32:29.519222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.372 [2024-10-27 11:32:29.519231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:44.372 [2024-10-27 11:32:29.519239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.557706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.372 [2024-10-27 11:32:29.557754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.372 [2024-10-27 11:32:29.557766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.372 [2024-10-27 11:32:29.557774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.557881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.372 [2024-10-27 11:32:29.557891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.372 [2024-10-27 11:32:29.557900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.372 [2024-10-27 11:32:29.557909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.557957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.372 [2024-10-27 11:32:29.557967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.372 [2024-10-27 11:32:29.557975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.372 [2024-10-27 11:32:29.557982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.557999] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.372 [2024-10-27 11:32:29.558011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.372 [2024-10-27 11:32:29.558019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.372 [2024-10-27 11:32:29.558025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.372 [2024-10-27 11:32:29.641102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.372 [2024-10-27 11:32:29.641161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.372 [2024-10-27 11:32:29.641173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.372 [2024-10-27 11:32:29.641182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.709994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.632 [2024-10-27 11:32:29.710069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.632 [2024-10-27 11:32:29.710175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.632 [2024-10-27 11:32:29.710239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.632 [2024-10-27 11:32:29.710390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.632 [2024-10-27 11:32:29.710452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.632 [2024-10-27 11:32:29.710525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710534] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.632 [2024-10-27 11:32:29.710594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.632 [2024-10-27 11:32:29.710606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.632 [2024-10-27 11:32:29.710614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.632 [2024-10-27 11:32:29.710769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.915 ms, result 0 00:17:45.203 00:17:45.203 00:17:45.203 11:32:30 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74062 00:17:45.203 11:32:30 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:45.203 11:32:30 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74062 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74062 ']' 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.203 11:32:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:45.464 [2024-10-27 11:32:30.551244] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:17:45.464 [2024-10-27 11:32:30.551406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74062 ] 00:17:45.464 [2024-10-27 11:32:30.714527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.725 [2024-10-27 11:32:30.835062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.297 11:32:31 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.297 11:32:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:46.298 11:32:31 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:46.559 [2024-10-27 11:32:31.721740] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.559 [2024-10-27 11:32:31.721820] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.823 [2024-10-27 11:32:31.881180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.881241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.823 [2024-10-27 11:32:31.881258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:46.823 [2024-10-27 11:32:31.881267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.884259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.884326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.823 [2024-10-27 11:32:31.884339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.970 ms 00:17:46.823 [2024-10-27 11:32:31.884348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.884478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.823 [2024-10-27 11:32:31.885409] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.823 [2024-10-27 11:32:31.885460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.885469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.823 [2024-10-27 11:32:31.885481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:17:46.823 [2024-10-27 11:32:31.885489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.887218] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.823 [2024-10-27 11:32:31.901544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.901600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.823 [2024-10-27 11:32:31.901613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:17:46.823 [2024-10-27 11:32:31.901624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.901748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.901762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.823 [2024-10-27 11:32:31.901772] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:46.823 [2024-10-27 11:32:31.901782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.910587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.910641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.823 [2024-10-27 11:32:31.910652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.753 ms 00:17:46.823 [2024-10-27 11:32:31.910663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.910783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.910797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.823 [2024-10-27 11:32:31.910806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:46.823 [2024-10-27 11:32:31.910815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.910842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.910857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.823 [2024-10-27 11:32:31.910865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:46.823 [2024-10-27 11:32:31.910875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.910899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.823 [2024-10-27 11:32:31.915138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.915181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.823 [2024-10-27 11:32:31.915194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:17:46.823 [2024-10-27 11:32:31.915202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.915283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.915292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.823 [2024-10-27 11:32:31.915321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:46.823 [2024-10-27 11:32:31.915329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.915354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.823 [2024-10-27 11:32:31.915378] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.823 [2024-10-27 11:32:31.915422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.823 [2024-10-27 11:32:31.915438] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.823 [2024-10-27 11:32:31.915549] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.823 [2024-10-27 11:32:31.915560] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.823 [2024-10-27 11:32:31.915573] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.823 [2024-10-27 11:32:31.915584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.823 [2024-10-27 11:32:31.915598] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.823 [2024-10-27 11:32:31.915607] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.823 [2024-10-27 11:32:31.915616] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.823 [2024-10-27 11:32:31.915624] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.823 [2024-10-27 11:32:31.915637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.823 [2024-10-27 11:32:31.915645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.915654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.823 [2024-10-27 11:32:31.915662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:17:46.823 [2024-10-27 11:32:31.915673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.915761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.823 [2024-10-27 11:32:31.915779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.823 [2024-10-27 11:32:31.915789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:46.823 [2024-10-27 11:32:31.915799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.823 [2024-10-27 11:32:31.915901] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.823 [2024-10-27 11:32:31.915914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.823 [2024-10-27 11:32:31.915921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.823 [2024-10-27 11:32:31.915932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.823 [2024-10-27 11:32:31.915940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.823 [2024-10-27 11:32:31.915948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.823 [2024-10-27 11:32:31.915955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.823 [2024-10-27 11:32:31.915968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.823 [2024-10-27 11:32:31.915976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.823 [2024-10-27 11:32:31.915985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.823 [2024-10-27 11:32:31.915992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.823 [2024-10-27 11:32:31.916000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.823 [2024-10-27 11:32:31.916007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.823 [2024-10-27 11:32:31.916015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.823 [2024-10-27 11:32:31.916023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.823 [2024-10-27 11:32:31.916032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.823 
[2024-10-27 11:32:31.916038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.823 [2024-10-27 11:32:31.916047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.823 [2024-10-27 11:32:31.916057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.823 [2024-10-27 11:32:31.916085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.823 [2024-10-27 11:32:31.916109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.823 [2024-10-27 11:32:31.916125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.823 [2024-10-27 11:32:31.916149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.823 [2024-10-27 11:32:31.916160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.823 [2024-10-27 11:32:31.916183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.823 [2024-10-27 11:32:31.916196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.823 [2024-10-27 11:32:31.916221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.823 [2024-10-27 11:32:31.916231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.823 [2024-10-27 11:32:31.916244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.824 [2024-10-27 11:32:31.916257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.824 [2024-10-27 11:32:31.916270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.824 [2024-10-27 11:32:31.916282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.824 [2024-10-27 11:32:31.916311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.824 [2024-10-27 11:32:31.916323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.824 [2024-10-27 11:32:31.916339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.824 [2024-10-27 11:32:31.916351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.824 [2024-10-27 11:32:31.916366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.824 [2024-10-27 11:32:31.916376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.824 [2024-10-27 11:32:31.916389] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.824 [2024-10-27 11:32:31.916403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.824 [2024-10-27 11:32:31.916418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.824 [2024-10-27 11:32:31.916441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.824 [2024-10-27 11:32:31.916456] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:46.824 [2024-10-27 11:32:31.916467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.824 [2024-10-27 11:32:31.916481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.824 [2024-10-27 11:32:31.916493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.824 [2024-10-27 11:32:31.916506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.824 [2024-10-27 11:32:31.916517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.824 [2024-10-27 11:32:31.916533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.824 [2024-10-27 11:32:31.916548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.824 [2024-10-27 11:32:31.916597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.824 [2024-10-27 11:32:31.916610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.824 [2024-10-27 11:32:31.916621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.824 [2024-10-27 11:32:31.916634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.824 [2024-10-27 11:32:31.916645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.824 [2024-10-27 11:32:31.916659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.824 [2024-10-27 11:32:31.916671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.824 [2024-10-27 11:32:31.916685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.824 [2024-10-27 11:32:31.916696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.824 [2024-10-27 11:32:31.916763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.824 [2024-10-27 
11:32:31.916776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.824 [2024-10-27 11:32:31.916809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.824 [2024-10-27 11:32:31.916825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.824 [2024-10-27 11:32:31.916838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.824 [2024-10-27 11:32:31.916855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.916868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.824 [2024-10-27 11:32:31.916883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:17:46.824 [2024-10-27 11:32:31.916895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.949572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.949626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.824 [2024-10-27 11:32:31.949641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.583 ms 00:17:46.824 [2024-10-27 11:32:31.949650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.949791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.949805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.824 [2024-10-27 11:32:31.949816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:46.824 [2024-10-27 11:32:31.949824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.984744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.984788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.824 [2024-10-27 11:32:31.984804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.892 ms 00:17:46.824 [2024-10-27 11:32:31.984815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.984907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.984917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.824 [2024-10-27 11:32:31.984929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:46.824 [2024-10-27 11:32:31.984937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.985523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.985553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.824 [2024-10-27 11:32:31.985566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:17:46.824 [2024-10-27 11:32:31.985576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:31.985722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:31.985733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.824 [2024-10-27 11:32:31.985744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:17:46.824 [2024-10-27 11:32:31.985752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.003597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.003639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.824 [2024-10-27 11:32:32.003652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.819 ms 00:17:46.824 [2024-10-27 11:32:32.003660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.017930] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:46.824 [2024-10-27 11:32:32.017990] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.824 [2024-10-27 11:32:32.018006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.018014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.824 [2024-10-27 11:32:32.018026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.231 ms 00:17:46.824 [2024-10-27 11:32:32.018033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.043939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.043990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.824 [2024-10-27 11:32:32.044005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.812 ms 00:17:46.824 [2024-10-27 11:32:32.044013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.056842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.056884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.824 [2024-10-27 11:32:32.056901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.734 ms 00:17:46.824 [2024-10-27 11:32:32.056909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.069682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.069725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.824 [2024-10-27 11:32:32.069739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.683 ms 00:17:46.824 [2024-10-27 11:32:32.069747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.824 [2024-10-27 11:32:32.070420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.824 [2024-10-27 11:32:32.070451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.824 [2024-10-27 11:32:32.070464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:46.824 [2024-10-27 11:32:32.070472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.086 [2024-10-27 
11:32:32.149357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.086 [2024-10-27 11:32:32.149423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:47.086 [2024-10-27 11:32:32.149444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.854 ms 00:17:47.086 [2024-10-27 11:32:32.149454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.086 [2024-10-27 11:32:32.160528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:47.086 [2024-10-27 11:32:32.179241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.086 [2024-10-27 11:32:32.179311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:47.086 [2024-10-27 11:32:32.179324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.678 ms 00:17:47.086 [2024-10-27 11:32:32.179336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.086 [2024-10-27 11:32:32.179433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.179448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:47.087 [2024-10-27 11:32:32.179458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:47.087 [2024-10-27 11:32:32.179469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.179529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.179541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:47.087 [2024-10-27 11:32:32.179549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:47.087 [2024-10-27 11:32:32.179559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.179589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.179600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:47.087 [2024-10-27 11:32:32.179608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:47.087 [2024-10-27 11:32:32.179621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.179656] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:47.087 [2024-10-27 11:32:32.179670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.179678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:47.087 [2024-10-27 11:32:32.179688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:47.087 [2024-10-27 11:32:32.179700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.205380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.205430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:47.087 [2024-10-27 11:32:32.205447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.653 ms 00:17:47.087 [2024-10-27 11:32:32.205455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.205587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.087 [2024-10-27 11:32:32.205599] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:47.087 [2024-10-27 11:32:32.205611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:47.087 [2024-10-27 11:32:32.205619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.087 [2024-10-27 11:32:32.206715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:47.087 [2024-10-27 11:32:32.210225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.207 ms, result 0 00:17:47.087 [2024-10-27 11:32:32.212774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.087 Some configs were skipped because the RPC state that can call them passed over. 00:17:47.087 11:32:32 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:47.348 [2024-10-27 11:32:32.447664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.348 [2024-10-27 11:32:32.447735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:47.348 [2024-10-27 11:32:32.447749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:17:47.348 [2024-10-27 11:32:32.447761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.348 [2024-10-27 11:32:32.447798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.963 ms, result 0 00:17:47.348 true 00:17:47.349 11:32:32 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:47.610 [2024-10-27 11:32:32.655878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.610 [2024-10-27 11:32:32.655936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:47.610 [2024-10-27 11:32:32.655951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:17:47.610 [2024-10-27 11:32:32.655959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.610 [2024-10-27 11:32:32.655999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.871 ms, result 0 00:17:47.610 true 00:17:47.610 11:32:32 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74062 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74062 ']' 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74062 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74062 00:17:47.610 killing process with pid 74062 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74062' 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74062 00:17:47.610 11:32:32 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74062 00:17:48.184 [2024-10-27 11:32:33.399582] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.399630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.184 [2024-10-27 11:32:33.399641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.184 [2024-10-27 11:32:33.399648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.399666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.184 [2024-10-27 11:32:33.401821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.401847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.184 [2024-10-27 11:32:33.401860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:17:48.184 [2024-10-27 11:32:33.401867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.402102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.402115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.184 [2024-10-27 11:32:33.402123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:17:48.184 [2024-10-27 11:32:33.402129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.405380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.405406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.184 [2024-10-27 11:32:33.405414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:17:48.184 [2024-10-27 11:32:33.405422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.410649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.410673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.184 [2024-10-27 11:32:33.410684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.199 ms 00:17:48.184 [2024-10-27 11:32:33.410691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.418128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.418154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.184 [2024-10-27 11:32:33.418165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.382 ms 00:17:48.184 [2024-10-27 11:32:33.418175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.424285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.424317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.184 [2024-10-27 11:32:33.424327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.079 ms 00:17:48.184 [2024-10-27 11:32:33.424334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.424440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.424448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.184 [2024-10-27 11:32:33.424455] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.184 [2024-10-27 11:32:33.424461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.432276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.432306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.184 [2024-10-27 11:32:33.432315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.799 ms 00:17:48.184 [2024-10-27 11:32:33.432321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.439698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.439723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.184 [2024-10-27 11:32:33.439734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:17:48.184 [2024-10-27 11:32:33.439739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.446838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.446861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.184 [2024-10-27 11:32:33.446870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.068 ms 00:17:48.184 [2024-10-27 11:32:33.446875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.453731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.184 [2024-10-27 11:32:33.453756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.184 [2024-10-27 11:32:33.453764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:17:48.184 [2024-10-27 11:32:33.453769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.184 [2024-10-27 11:32:33.453796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.184 [2024-10-27 11:32:33.453807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453872] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.453999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.454006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.454012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.454019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 [2024-10-27 11:32:33.454025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.184 
[2024-10-27 11:32:33.454032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:48.185 [2024-10-27 11:32:33.454188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.185 [2024-10-27 11:32:33.454452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.185 [2024-10-27 11:32:33.454460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:17:48.185 [2024-10-27 11:32:33.454470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.185 [2024-10-27 11:32:33.454479] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.185 [2024-10-27 11:32:33.454486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.185 [2024-10-27 11:32:33.454493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.185 [2024-10-27 11:32:33.454498] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.185 [2024-10-27 11:32:33.454505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.185 [2024-10-27 11:32:33.454511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.185 [2024-10-27 11:32:33.454517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.185 [2024-10-27 11:32:33.454522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.185 [2024-10-27 11:32:33.454528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:48.185 [2024-10-27 11:32:33.454534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.185 [2024-10-27 11:32:33.454541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:48.185 [2024-10-27 11:32:33.454547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.464016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.447 [2024-10-27 11:32:33.464039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.447 [2024-10-27 11:32:33.464049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.452 ms 00:17:48.447 [2024-10-27 11:32:33.464055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.464353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.447 [2024-10-27 11:32:33.464371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.447 [2024-10-27 11:32:33.464379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:48.447 [2024-10-27 11:32:33.464385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.499007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.499034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.447 [2024-10-27 11:32:33.499043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.499050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.499996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.500020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.447 [2024-10-27 11:32:33.500029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.500034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.500071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.500078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.447 [2024-10-27 11:32:33.500089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.500094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.500109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.500115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.447 [2024-10-27 11:32:33.500122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.500128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.559897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.559926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.447 [2024-10-27 11:32:33.559936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.559943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 
11:32:33.608504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.608536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.447 [2024-10-27 11:32:33.608546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.608552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.608620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.447 [2024-10-27 11:32:33.608630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.447 [2024-10-27 11:32:33.608640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.447 [2024-10-27 11:32:33.608646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.447 [2024-10-27 11:32:33.608669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.448 [2024-10-27 11:32:33.608675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.448 [2024-10-27 11:32:33.608683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.448 [2024-10-27 11:32:33.608689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.448 [2024-10-27 11:32:33.608756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.448 [2024-10-27 11:32:33.608763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.448 [2024-10-27 11:32:33.608772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.448 [2024-10-27 11:32:33.608778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.448 [2024-10-27 11:32:33.608802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.448 [2024-10-27 11:32:33.608809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.448 [2024-10-27 11:32:33.608816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.448 [2024-10-27 11:32:33.608822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.448 [2024-10-27 11:32:33.608852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.448 [2024-10-27 11:32:33.608858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.448 [2024-10-27 11:32:33.608869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.448 [2024-10-27 11:32:33.608874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.448 [2024-10-27 11:32:33.608909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.448 [2024-10-27 11:32:33.608916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.448 [2024-10-27 11:32:33.608923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.448 [2024-10-27 11:32:33.608929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.448 [2024-10-27 11:32:33.609032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 209.431 ms, result 0 00:17:49.020 11:32:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.020 [2024-10-27 11:32:34.179129] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:17:49.020 [2024-10-27 11:32:34.179250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74120 ] 00:17:49.280 [2024-10-27 11:32:34.334543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.280 [2024-10-27 11:32:34.412736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.543 [2024-10-27 11:32:34.616537] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.543 [2024-10-27 11:32:34.616592] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.543 [2024-10-27 11:32:34.768496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.768541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.543 [2024-10-27 11:32:34.768553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.543 [2024-10-27 11:32:34.768568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.771182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.771216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.543 [2024-10-27 11:32:34.771226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:49.543 [2024-10-27 11:32:34.771233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.771322] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.543 [2024-10-27 11:32:34.771970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.543 [2024-10-27 11:32:34.771995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.772003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.543 [2024-10-27 11:32:34.772012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:17:49.543 [2024-10-27 11:32:34.772019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.773134] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:49.543 [2024-10-27 11:32:34.785774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.785806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:49.543 [2024-10-27 11:32:34.785820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.641 ms 00:17:49.543 [2024-10-27 11:32:34.785827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.785905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.785915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:49.543 [2024-10-27 11:32:34.785924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:49.543 [2024-10-27 
11:32:34.785930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.790874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.790908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.543 [2024-10-27 11:32:34.790916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:17:49.543 [2024-10-27 11:32:34.790924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.791011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.791021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.543 [2024-10-27 11:32:34.791029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:49.543 [2024-10-27 11:32:34.791036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.791058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.791066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:49.543 [2024-10-27 11:32:34.791076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:49.543 [2024-10-27 11:32:34.791083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.791102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:49.543 [2024-10-27 11:32:34.794525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.794550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.543 [2024-10-27 11:32:34.794560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:17:49.543 [2024-10-27 11:32:34.794567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.794599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.794607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:49.543 [2024-10-27 11:32:34.794614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:49.543 [2024-10-27 11:32:34.794621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.794637] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:49.543 [2024-10-27 11:32:34.794654] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:49.543 [2024-10-27 11:32:34.794689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:49.543 [2024-10-27 11:32:34.794703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:49.543 [2024-10-27 11:32:34.794804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:49.543 [2024-10-27 11:32:34.794814] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:49.543 [2024-10-27 11:32:34.794824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:49.543 [2024-10-27 11:32:34.794833] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:49.543 [2024-10-27 11:32:34.794841] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:49.543 [2024-10-27 11:32:34.794851] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:49.543 [2024-10-27 11:32:34.794858] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:49.543 [2024-10-27 11:32:34.794865] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:49.543 [2024-10-27 11:32:34.794872] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:49.543 [2024-10-27 11:32:34.794879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.794887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:49.543 [2024-10-27 11:32:34.794894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:49.543 [2024-10-27 11:32:34.794901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.794989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.543 [2024-10-27 11:32:34.795004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:49.543 [2024-10-27 11:32:34.795012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:49.543 [2024-10-27 11:32:34.795021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.543 [2024-10-27 11:32:34.795129] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:49.543 [2024-10-27 11:32:34.795144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:49.543 [2024-10-27 11:32:34.795152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:49.543 [2024-10-27 11:32:34.795174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:49.543 [2024-10-27 11:32:34.795195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.543 [2024-10-27 11:32:34.795207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:49.543 [2024-10-27 11:32:34.795214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:49.543 [2024-10-27 11:32:34.795222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.543 [2024-10-27 11:32:34.795234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:49.543 [2024-10-27 11:32:34.795241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:49.543 [2024-10-27 11:32:34.795248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:49.543 [2024-10-27 11:32:34.795261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:49.543 [2024-10-27 11:32:34.795280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:49.543 [2024-10-27 11:32:34.795311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:49.543 [2024-10-27 11:32:34.795331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:49.543 [2024-10-27 11:32:34.795351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:49.543 [2024-10-27 11:32:34.795358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.543 [2024-10-27 11:32:34.795364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:49.544 [2024-10-27 11:32:34.795371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:49.544 [2024-10-27 11:32:34.795377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.544 [2024-10-27 11:32:34.795384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:49.544 [2024-10-27 11:32:34.795390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:49.544 [2024-10-27 11:32:34.795396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.544 [2024-10-27 11:32:34.795403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:49.544 [2024-10-27 11:32:34.795409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:49.544 [2024-10-27 11:32:34.795416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.544 [2024-10-27 11:32:34.795422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:49.544 [2024-10-27 11:32:34.795428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:49.544 [2024-10-27 11:32:34.795435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.544 [2024-10-27 11:32:34.795440] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:49.544 [2024-10-27 11:32:34.795449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:49.544 [2024-10-27 11:32:34.795456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.544 [2024-10-27 11:32:34.795463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.544 [2024-10-27 11:32:34.795472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:49.544 [2024-10-27 11:32:34.795479] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:49.544 [2024-10-27 11:32:34.795485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:49.544 [2024-10-27 11:32:34.795492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:49.544 [2024-10-27 11:32:34.795499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:49.544 [2024-10-27 11:32:34.795505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:49.544 [2024-10-27 11:32:34.795513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:49.544 [2024-10-27 11:32:34.795521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:49.544 [2024-10-27 11:32:34.795536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:49.544 [2024-10-27 11:32:34.795543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:49.544 [2024-10-27 11:32:34.795550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:49.544 [2024-10-27 11:32:34.795556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:49.544 [2024-10-27 11:32:34.795563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:49.544 [2024-10-27 11:32:34.795570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:49.544 [2024-10-27 11:32:34.795577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:49.544 [2024-10-27 11:32:34.795584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:49.544 [2024-10-27 11:32:34.795591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:49.544 [2024-10-27 11:32:34.795626] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:49.544 [2024-10-27 11:32:34.795633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:49.544 [2024-10-27 11:32:34.795648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:49.544 [2024-10-27 11:32:34.795655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:49.544 [2024-10-27 11:32:34.795662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:49.544 [2024-10-27 11:32:34.795669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.544 [2024-10-27 11:32:34.795676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:49.544 [2024-10-27 11:32:34.795684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:17:49.544 [2024-10-27 11:32:34.795693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.821681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.821717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.806 [2024-10-27 11:32:34.821728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.938 ms 00:17:49.806 [2024-10-27 11:32:34.821735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.821850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.821860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:49.806 [2024-10-27 11:32:34.821872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:49.806 [2024-10-27 11:32:34.821879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.864906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.864947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.806 [2024-10-27 11:32:34.864959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.005 ms 00:17:49.806 [2024-10-27 11:32:34.864967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.865058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.865070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.806 [2024-10-27 11:32:34.865079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:49.806 [2024-10-27 11:32:34.865088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.865454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.865481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.806 [2024-10-27 11:32:34.865490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:17:49.806 [2024-10-27 11:32:34.865498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.806 [2024-10-27 11:32:34.865630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:49.806 [2024-10-27 11:32:34.865639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.806 [2024-10-27 11:32:34.865646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:17:49.807 [2024-10-27 11:32:34.865653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.879334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.879366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.807 [2024-10-27 11:32:34.879376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.661 ms 00:17:49.807 [2024-10-27 11:32:34.879383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.892402] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:49.807 [2024-10-27 11:32:34.892441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:49.807 [2024-10-27 11:32:34.892453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.892460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:49.807 [2024-10-27 11:32:34.892469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.971 ms 00:17:49.807 [2024-10-27 11:32:34.892476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.925694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.925762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:49.807 [2024-10-27 11:32:34.925776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.145 ms 00:17:49.807 [2024-10-27 11:32:34.925784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.938016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.938062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:49.807 [2024-10-27 11:32:34.938073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.140 ms 00:17:49.807 [2024-10-27 11:32:34.938080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.950370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.950417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:49.807 [2024-10-27 11:32:34.950428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.207 ms 00:17:49.807 [2024-10-27 11:32:34.950435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:34.951076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:34.951105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:49.807 [2024-10-27 11:32:34.951116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:17:49.807 [2024-10-27 11:32:34.951123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.016465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 
11:32:35.016521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:49.807 [2024-10-27 11:32:35.016536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.316 ms 00:17:49.807 [2024-10-27 11:32:35.016544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.027716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:49.807 [2024-10-27 11:32:35.047153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.047209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:49.807 [2024-10-27 11:32:35.047222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.475 ms 00:17:49.807 [2024-10-27 11:32:35.047230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.047359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.047376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:49.807 [2024-10-27 11:32:35.047387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:49.807 [2024-10-27 11:32:35.047396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.047453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.047464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:49.807 [2024-10-27 11:32:35.047472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:49.807 [2024-10-27 11:32:35.047481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.047508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.047517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:49.807 [2024-10-27 11:32:35.047528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:49.807 [2024-10-27 11:32:35.047537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.047576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:49.807 [2024-10-27 11:32:35.047586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.047595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:49.807 [2024-10-27 11:32:35.047604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:49.807 [2024-10-27 11:32:35.047613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.073853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.073911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:49.807 [2024-10-27 11:32:35.073924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.219 ms 00:17:49.807 [2024-10-27 11:32:35.073932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.074073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.807 [2024-10-27 11:32:35.074086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:49.807 [2024-10-27 
11:32:35.074097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:49.807 [2024-10-27 11:32:35.074105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.807 [2024-10-27 11:32:35.075674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.807 [2024-10-27 11:32:35.079197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.802 ms, result 0 00:17:49.807 [2024-10-27 11:32:35.080551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.068 [2024-10-27 11:32:35.094074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.012  [2024-10-27T11:32:37.237Z] Copying: 21/256 [MB] (21 MBps) [2024-10-27T11:32:38.180Z] Copying: 39/256 [MB] (18 MBps) [2024-10-27T11:32:39.564Z] Copying: 60/256 [MB] (20 MBps) [2024-10-27T11:32:40.149Z] Copying: 77/256 [MB] (16 MBps) [2024-10-27T11:32:41.531Z] Copying: 91/256 [MB] (13 MBps) [2024-10-27T11:32:42.472Z] Copying: 108/256 [MB] (17 MBps) [2024-10-27T11:32:43.414Z] Copying: 126/256 [MB] (18 MBps) [2024-10-27T11:32:44.354Z] Copying: 139/256 [MB] (12 MBps) [2024-10-27T11:32:45.295Z] Copying: 160/256 [MB] (21 MBps) [2024-10-27T11:32:46.235Z] Copying: 188/256 [MB] (28 MBps) [2024-10-27T11:32:47.220Z] Copying: 211/256 [MB] (22 MBps) [2024-10-27T11:32:48.164Z] Copying: 228/256 [MB] (16 MBps) [2024-10-27T11:32:48.736Z] Copying: 250/256 [MB] (21 MBps) [2024-10-27T11:32:49.311Z] Copying: 256/256 [MB] (average 18 MBps)[2024-10-27 11:32:49.030012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.030 [2024-10-27 11:32:49.042120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.042176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.030 [2024-10-27 11:32:49.042200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:04.030 [2024-10-27 11:32:49.042213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.042254] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:04.030 [2024-10-27 11:32:49.045562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.045618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.030 [2024-10-27 11:32:49.045636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:18:04.030 [2024-10-27 11:32:49.045648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.046055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.046088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.030 [2024-10-27 11:32:49.046104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:18:04.030 [2024-10-27 11:32:49.046117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.049901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.049941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.030 [2024-10-27 
11:32:49.049966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.757 ms 00:18:04.030 [2024-10-27 11:32:49.049979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.057144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.057192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:04.030 [2024-10-27 11:32:49.057208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.129 ms 00:18:04.030 [2024-10-27 11:32:49.057220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.082556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.082603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.030 [2024-10-27 11:32:49.082622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.226 ms 00:18:04.030 [2024-10-27 11:32:49.082634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.097993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.098045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.030 [2024-10-27 11:32:49.098074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.293 ms 00:18:04.030 [2024-10-27 11:32:49.098087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.098317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.098344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.030 [2024-10-27 11:32:49.098362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:04.030 [2024-10-27 11:32:49.098374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.124030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.124080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:04.030 [2024-10-27 11:32:49.124097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.617 ms 00:18:04.030 [2024-10-27 11:32:49.124108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.149214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.149265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:04.030 [2024-10-27 11:32:49.149283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.044 ms 00:18:04.030 [2024-10-27 11:32:49.149313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.173642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.173701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.030 [2024-10-27 11:32:49.173716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.260 ms 00:18:04.030 [2024-10-27 11:32:49.173727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.198266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.030 [2024-10-27 11:32:49.198329] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.030 [2024-10-27 11:32:49.198347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.437 ms 00:18:04.030 [2024-10-27 11:32:49.198359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.030 [2024-10-27 11:32:49.198419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.030 [2024-10-27 11:32:49.198447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:04.030 [2024-10-27 11:32:49.198691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 
11:32:49.198745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.198988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:04.031 [2024-10-27 11:32:49.199073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:04.031 [2024-10-27 11:32:49.199818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.031 [2024-10-27 11:32:49.199832] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 140afd10-86e9-4fa8-ad6f-6665596a139a 00:18:04.031 [2024-10-27 11:32:49.199846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.031 [2024-10-27 11:32:49.199858] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.031 [2024-10-27 11:32:49.199870] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.031 [2024-10-27 11:32:49.199883] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.031 [2024-10-27 11:32:49.199897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.031 [2024-10-27 11:32:49.199911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.031 [2024-10-27 11:32:49.199924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.031 [2024-10-27 11:32:49.199936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.031 [2024-10-27 11:32:49.199948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.031 [2024-10-27 11:32:49.199961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.031 [2024-10-27 11:32:49.199975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.031 [2024-10-27 11:32:49.199990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:18:04.031 [2024-10-27 11:32:49.200007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.031 [2024-10-27 11:32:49.214835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.032 [2024-10-27 11:32:49.214883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.032 [2024-10-27 11:32:49.214900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.775 ms 00:18:04.032 [2024-10-27 11:32:49.214912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.032 [2024-10-27 11:32:49.215399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.032 [2024-10-27 11:32:49.215438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.032 [2024-10-27 11:32:49.215453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:18:04.032 [2024-10-27 11:32:49.215465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.032 [2024-10-27 11:32:49.254127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.032 [2024-10-27 11:32:49.254176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.032 [2024-10-27 11:32:49.254193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.032 [2024-10-27 11:32:49.254206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.032 
[2024-10-27 11:32:49.254350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.032 [2024-10-27 11:32:49.254372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.032 [2024-10-27 11:32:49.254387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.032 [2024-10-27 11:32:49.254401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.032 [2024-10-27 11:32:49.254479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.032 [2024-10-27 11:32:49.254495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.032 [2024-10-27 11:32:49.254510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.032 [2024-10-27 11:32:49.254523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.032 [2024-10-27 11:32:49.254553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.032 [2024-10-27 11:32:49.254568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.032 [2024-10-27 11:32:49.254586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.032 [2024-10-27 11:32:49.254600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.338542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.338596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.294 [2024-10-27 11:32:49.338615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.338626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.407680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.407745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.294 [2024-10-27 11:32:49.407772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.407784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.407887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.407905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.294 [2024-10-27 11:32:49.407918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.407931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.407978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.407993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.294 [2024-10-27 11:32:49.408007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.408020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.408172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.408189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.294 [2024-10-27 11:32:49.408203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.408216] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.408266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.408319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.294 [2024-10-27 11:32:49.408336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.408350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.408413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.408439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.294 [2024-10-27 11:32:49.408453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.408466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.408534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.294 [2024-10-27 11:32:49.408551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.294 [2024-10-27 11:32:49.408564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.294 [2024-10-27 11:32:49.408577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.294 [2024-10-27 11:32:49.408820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.678 ms, result 0 00:18:04.865 00:18:04.865 00:18:05.127 11:32:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:05.701 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:05.701 Process with pid 74062 is not found 00:18:05.701 11:32:50 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74062 00:18:05.701 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74062 ']' 00:18:05.701 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74062 00:18:05.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74062) - No such process 00:18:05.701 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74062 is not found' 00:18:05.701 00:18:05.701 real 1m18.376s 00:18:05.701 user 1m39.786s 00:18:05.701 sys 0m5.714s 00:18:05.701 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.701 11:32:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:05.701 ************************************ 00:18:05.701 END TEST ftl_trim 00:18:05.701 ************************************ 00:18:05.701 11:32:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:05.701 11:32:50 ftl -- common/autotest_common.sh@1101 -- # '[' 
5 -le 1 ']' 00:18:05.701 11:32:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.701 11:32:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:05.701 ************************************ 00:18:05.701 START TEST ftl_restore 00:18:05.701 ************************************ 00:18:05.701 11:32:50 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:05.701 * Looking for test storage... 00:18:05.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.701 11:32:50 ftl.ftl_restore -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:05.701 11:32:50 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lcov --version 00:18:05.701 11:32:50 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:05.962 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:05.962 11:32:51 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.962 11:32:51 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.962 11:32:51 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.963 11:32:51 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.963 --rc genhtml_branch_coverage=1 00:18:05.963 --rc genhtml_function_coverage=1 00:18:05.963 --rc genhtml_legend=1 00:18:05.963 --rc geninfo_all_blocks=1 00:18:05.963 --rc geninfo_unexecuted_blocks=1 00:18:05.963 00:18:05.963 ' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.963 --rc genhtml_branch_coverage=1 00:18:05.963 --rc genhtml_function_coverage=1 00:18:05.963 --rc genhtml_legend=1 00:18:05.963 --rc geninfo_all_blocks=1 00:18:05.963 --rc geninfo_unexecuted_blocks=1 00:18:05.963 00:18:05.963 ' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.963 --rc genhtml_branch_coverage=1 00:18:05.963 --rc genhtml_function_coverage=1 00:18:05.963 --rc genhtml_legend=1 00:18:05.963 --rc geninfo_all_blocks=1 00:18:05.963 --rc geninfo_unexecuted_blocks=1 00:18:05.963 00:18:05.963 ' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.963 --rc genhtml_branch_coverage=1 00:18:05.963 --rc genhtml_function_coverage=1 00:18:05.963 --rc genhtml_legend=1 00:18:05.963 --rc geninfo_all_blocks=1 00:18:05.963 --rc geninfo_unexecuted_blocks=1 00:18:05.963 00:18:05.963 ' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
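The xtrace above shows scripts/common.sh comparing the installed lcov version (1.15) against 2 before settling on the coverage option names exported as LCOV_OPTS. As a rough illustration of what that trace is doing, the following is a minimal, self-contained bash sketch of the same dotted-version comparison, reconstructed from the traced steps (IFS=.-: splitting, per-component numeric compare); it is an assumption-based reconstruction for readability, not the SPDK script itself.

#!/usr/bin/env bash
# Sketch only: mirrors the "lt 1.15 2" check traced in the log above,
# reconstructed from the xtrace; not the SPDK scripts/common.sh original.
version_lt() {
    local IFS=.-:                      # split on dots, dashes, and colons, as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                           # equal versions are not "less than"
}

# Usage matching the traced check: lcov 1.15 is older than 2, so the run
# keeps the legacy --rc lcov_* option names seen in the log.
if version_lt 1.15 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "$LCOV_OPTS"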
00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.10XM2D97Yc 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:05.963 
11:32:51 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74358 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74358 00:18:05.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74358 ']' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.963 11:32:51 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.963 11:32:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:05.963 [2024-10-27 11:32:51.151109] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:18:05.963 [2024-10-27 11:32:51.151498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74358 ] 00:18:06.225 [2024-10-27 11:32:51.312735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.225 [2024-10-27 11:32:51.435225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:07.169 11:32:52 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:07.169 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:07.431 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:07.431 { 00:18:07.431 "name": "nvme0n1", 00:18:07.431 "aliases": [ 00:18:07.431 "baeb9bc3-5378-4653-87b3-8d5f9dbef981" 00:18:07.431 ], 00:18:07.431 "product_name": "NVMe disk", 00:18:07.431 "block_size": 4096, 00:18:07.431 "num_blocks": 1310720, 00:18:07.431 "uuid": 
"baeb9bc3-5378-4653-87b3-8d5f9dbef981", 00:18:07.431 "numa_id": -1, 00:18:07.431 "assigned_rate_limits": { 00:18:07.431 "rw_ios_per_sec": 0, 00:18:07.431 "rw_mbytes_per_sec": 0, 00:18:07.431 "r_mbytes_per_sec": 0, 00:18:07.431 "w_mbytes_per_sec": 0 00:18:07.431 }, 00:18:07.431 "claimed": true, 00:18:07.431 "claim_type": "read_many_write_one", 00:18:07.431 "zoned": false, 00:18:07.431 "supported_io_types": { 00:18:07.431 "read": true, 00:18:07.431 "write": true, 00:18:07.431 "unmap": true, 00:18:07.431 "flush": true, 00:18:07.431 "reset": true, 00:18:07.431 "nvme_admin": true, 00:18:07.431 "nvme_io": true, 00:18:07.431 "nvme_io_md": false, 00:18:07.431 "write_zeroes": true, 00:18:07.431 "zcopy": false, 00:18:07.431 "get_zone_info": false, 00:18:07.431 "zone_management": false, 00:18:07.431 "zone_append": false, 00:18:07.431 "compare": true, 00:18:07.431 "compare_and_write": false, 00:18:07.431 "abort": true, 00:18:07.431 "seek_hole": false, 00:18:07.431 "seek_data": false, 00:18:07.431 "copy": true, 00:18:07.431 "nvme_iov_md": false 00:18:07.431 }, 00:18:07.431 "driver_specific": { 00:18:07.431 "nvme": [ 00:18:07.431 { 00:18:07.431 "pci_address": "0000:00:11.0", 00:18:07.431 "trid": { 00:18:07.431 "trtype": "PCIe", 00:18:07.431 "traddr": "0000:00:11.0" 00:18:07.431 }, 00:18:07.431 "ctrlr_data": { 00:18:07.431 "cntlid": 0, 00:18:07.431 "vendor_id": "0x1b36", 00:18:07.431 "model_number": "QEMU NVMe Ctrl", 00:18:07.431 "serial_number": "12341", 00:18:07.431 "firmware_revision": "8.0.0", 00:18:07.431 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:07.431 "oacs": { 00:18:07.431 "security": 0, 00:18:07.431 "format": 1, 00:18:07.431 "firmware": 0, 00:18:07.431 "ns_manage": 1 00:18:07.431 }, 00:18:07.431 "multi_ctrlr": false, 00:18:07.431 "ana_reporting": false 00:18:07.431 }, 00:18:07.431 "vs": { 00:18:07.431 "nvme_version": "1.4" 00:18:07.431 }, 00:18:07.431 "ns_data": { 00:18:07.431 "id": 1, 00:18:07.431 "can_share": false 00:18:07.431 } 00:18:07.431 } 00:18:07.431 ], 00:18:07.431 "mp_policy": "active_passive" 00:18:07.431 } 00:18:07.431 } 00:18:07.431 ]' 00:18:07.431 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:07.431 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:07.431 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.692 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:07.692 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:07.692 11:32:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:07.692 11:32:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:07.692 11:32:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:07.692 11:32:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:07.693 11:32:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:07.693 11:32:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:07.693 11:32:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=72335406-f125-4e5a-9fea-98deb3430f78 00:18:07.693 11:32:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:07.693 11:32:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 72335406-f125-4e5a-9fea-98deb3430f78 00:18:07.951 11:32:53 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:08.211 11:32:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=1f60003e-983a-4f8e-bd1b-7b74a528446b 00:18:08.211 11:32:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1f60003e-983a-4f8e-bd1b-7b74a528446b 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:08.472 11:32:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.472 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.472 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:08.472 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:08.472 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:08.472 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.733 { 00:18:08.733 "name": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:08.733 "aliases": [ 00:18:08.733 "lvs/nvme0n1p0" 00:18:08.733 ], 00:18:08.733 "product_name": "Logical Volume", 00:18:08.733 "block_size": 4096, 00:18:08.733 "num_blocks": 26476544, 00:18:08.733 "uuid": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:08.733 "assigned_rate_limits": { 00:18:08.733 "rw_ios_per_sec": 0, 00:18:08.733 "rw_mbytes_per_sec": 0, 00:18:08.733 "r_mbytes_per_sec": 0, 00:18:08.733 "w_mbytes_per_sec": 0 00:18:08.733 }, 00:18:08.733 "claimed": false, 00:18:08.733 "zoned": false, 00:18:08.733 "supported_io_types": { 00:18:08.733 "read": true, 00:18:08.733 "write": true, 00:18:08.733 "unmap": true, 00:18:08.733 "flush": false, 00:18:08.733 "reset": true, 00:18:08.733 "nvme_admin": false, 00:18:08.733 "nvme_io": false, 00:18:08.733 "nvme_io_md": false, 00:18:08.733 "write_zeroes": true, 00:18:08.733 "zcopy": false, 00:18:08.733 "get_zone_info": false, 00:18:08.733 "zone_management": false, 00:18:08.733 "zone_append": false, 00:18:08.733 "compare": false, 00:18:08.733 "compare_and_write": false, 00:18:08.733 "abort": false, 00:18:08.733 "seek_hole": true, 00:18:08.733 "seek_data": true, 00:18:08.733 "copy": false, 00:18:08.733 "nvme_iov_md": false 00:18:08.733 }, 00:18:08.733 "driver_specific": { 00:18:08.733 "lvol": { 00:18:08.733 "lvol_store_uuid": "1f60003e-983a-4f8e-bd1b-7b74a528446b", 00:18:08.733 "base_bdev": "nvme0n1", 00:18:08.733 "thin_provision": true, 00:18:08.733 "num_allocated_clusters": 0, 00:18:08.733 "snapshot": false, 00:18:08.733 "clone": false, 00:18:08.733 "esnap_clone": false 00:18:08.733 } 00:18:08.733 } 00:18:08.733 } 00:18:08.733 ]' 00:18:08.733 11:32:53 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:08.733 11:32:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:08.733 11:32:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:08.733 11:32:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:08.733 11:32:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:08.994 11:32:54 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:08.995 11:32:54 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:08.995 11:32:54 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.995 { 00:18:08.995 "name": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:08.995 "aliases": [ 00:18:08.995 "lvs/nvme0n1p0" 00:18:08.995 ], 00:18:08.995 "product_name": "Logical Volume", 00:18:08.995 "block_size": 4096, 00:18:08.995 "num_blocks": 26476544, 00:18:08.995 "uuid": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:08.995 "assigned_rate_limits": { 00:18:08.995 "rw_ios_per_sec": 0, 00:18:08.995 "rw_mbytes_per_sec": 0, 00:18:08.995 "r_mbytes_per_sec": 0, 00:18:08.995 "w_mbytes_per_sec": 0 00:18:08.995 }, 00:18:08.995 "claimed": false, 00:18:08.995 "zoned": false, 00:18:08.995 "supported_io_types": { 00:18:08.995 "read": true, 00:18:08.995 "write": true, 00:18:08.995 "unmap": true, 00:18:08.995 "flush": false, 00:18:08.995 "reset": true, 00:18:08.995 "nvme_admin": false, 00:18:08.995 "nvme_io": false, 00:18:08.995 "nvme_io_md": false, 00:18:08.995 "write_zeroes": true, 00:18:08.995 "zcopy": false, 00:18:08.995 "get_zone_info": false, 00:18:08.995 "zone_management": false, 00:18:08.995 "zone_append": false, 00:18:08.995 "compare": false, 00:18:08.995 "compare_and_write": false, 00:18:08.995 "abort": false, 00:18:08.995 "seek_hole": true, 00:18:08.995 "seek_data": true, 00:18:08.995 "copy": false, 00:18:08.995 "nvme_iov_md": false 00:18:08.995 }, 00:18:08.995 "driver_specific": { 00:18:08.995 "lvol": { 00:18:08.995 "lvol_store_uuid": "1f60003e-983a-4f8e-bd1b-7b74a528446b", 00:18:08.995 "base_bdev": "nvme0n1", 00:18:08.995 "thin_provision": true, 00:18:08.995 "num_allocated_clusters": 0, 00:18:08.995 "snapshot": false, 00:18:08.995 "clone": false, 00:18:08.995 "esnap_clone": false 00:18:08.995 } 00:18:08.995 } 00:18:08.995 } 00:18:08.995 ]' 00:18:08.995 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
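Each get_bdev_size call in this stretch follows the same pattern: query the bdev over RPC, extract block_size and num_blocks with jq, and convert the product to MiB. For the raw namespace nvme0n1 that works out to 4096 B x 1310720 blocks = 5120 MiB, and for the thin-provisioned lvol to 4096 B x 26476544 blocks = 103424 MiB, matching the values echoed in the trace. A self-contained sketch of that computation (the rpc.py path and bdev names come from the trace; the helper name is illustrative):

    #!/usr/bin/env bash
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    bdev_size_mib() {                              # illustrative stand-in for get_bdev_size
        local name=$1 info bs nb
        info=$("$rpc_py" bdev_get_bdevs -b "$name")
        bs=$(jq '.[] .block_size' <<<"$info")      # 4096 for both bdevs above
        nb=$(jq '.[] .num_blocks' <<<"$info")      # 1310720 (nvme0n1) / 26476544 (lvol)
        echo $(( bs * nb / 1024 / 1024 ))          # bytes -> MiB
    }

    bdev_size_mib nvme0n1                          # prints 5120 on this setup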
00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:09.256 11:32:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:09.256 11:32:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:09.256 11:32:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:09.256 11:32:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:09.256 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c35e21fa-436e-45f1-bfc8-083b34ea7d30 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:09.519 { 00:18:09.519 "name": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:09.519 "aliases": [ 00:18:09.519 "lvs/nvme0n1p0" 00:18:09.519 ], 00:18:09.519 "product_name": "Logical Volume", 00:18:09.519 "block_size": 4096, 00:18:09.519 "num_blocks": 26476544, 00:18:09.519 "uuid": "c35e21fa-436e-45f1-bfc8-083b34ea7d30", 00:18:09.519 "assigned_rate_limits": { 00:18:09.519 "rw_ios_per_sec": 0, 00:18:09.519 "rw_mbytes_per_sec": 0, 00:18:09.519 "r_mbytes_per_sec": 0, 00:18:09.519 "w_mbytes_per_sec": 0 00:18:09.519 }, 00:18:09.519 "claimed": false, 00:18:09.519 "zoned": false, 00:18:09.519 "supported_io_types": { 00:18:09.519 "read": true, 00:18:09.519 "write": true, 00:18:09.519 "unmap": true, 00:18:09.519 "flush": false, 00:18:09.519 "reset": true, 00:18:09.519 "nvme_admin": false, 00:18:09.519 "nvme_io": false, 00:18:09.519 "nvme_io_md": false, 00:18:09.519 "write_zeroes": true, 00:18:09.519 "zcopy": false, 00:18:09.519 "get_zone_info": false, 00:18:09.519 "zone_management": false, 00:18:09.519 "zone_append": false, 00:18:09.519 "compare": false, 00:18:09.519 "compare_and_write": false, 00:18:09.519 "abort": false, 00:18:09.519 "seek_hole": true, 00:18:09.519 "seek_data": true, 00:18:09.519 "copy": false, 00:18:09.519 "nvme_iov_md": false 00:18:09.519 }, 00:18:09.519 "driver_specific": { 00:18:09.519 "lvol": { 00:18:09.519 "lvol_store_uuid": "1f60003e-983a-4f8e-bd1b-7b74a528446b", 00:18:09.519 "base_bdev": "nvme0n1", 00:18:09.519 "thin_provision": true, 00:18:09.519 "num_allocated_clusters": 0, 00:18:09.519 "snapshot": false, 00:18:09.519 "clone": false, 00:18:09.519 "esnap_clone": false 00:18:09.519 } 00:18:09.519 } 00:18:09.519 } 00:18:09.519 ]' 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:09.519 11:32:54 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:09.519 11:32:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c35e21fa-436e-45f1-bfc8-083b34ea7d30 --l2p_dram_limit 10' 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:09.519 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:09.519 11:32:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c35e21fa-436e-45f1-bfc8-083b34ea7d30 --l2p_dram_limit 10 -c nvc0n1p0 00:18:09.781 [2024-10-27 11:32:54.927905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.781 [2024-10-27 11:32:54.927945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:09.781 [2024-10-27 11:32:54.927959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:09.781 [2024-10-27 11:32:54.927965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.781 [2024-10-27 11:32:54.928011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.781 [2024-10-27 11:32:54.928019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.781 [2024-10-27 11:32:54.928027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:09.781 [2024-10-27 11:32:54.928045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.781 [2024-10-27 11:32:54.928064] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:09.781 [2024-10-27 11:32:54.928705] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:09.781 [2024-10-27 11:32:54.928730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.781 [2024-10-27 11:32:54.928736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.781 [2024-10-27 11:32:54.928744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:18:09.781 [2024-10-27 11:32:54.928750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.781 [2024-10-27 11:32:54.928803] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:18:09.781 [2024-10-27 11:32:54.929768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.781 [2024-10-27 11:32:54.929794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:09.782 [2024-10-27 11:32:54.929802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:09.782 [2024-10-27 11:32:54.929810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.934502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 
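The '[: : integer expression expected' message above is a non-fatal bash error from restore.sh line 54: '[' '' -eq 1 ']' compares an empty string numerically, so test complains and returns false, and the run continues without that optional branch. A guarded numeric test avoids the noise; the variable name below is a placeholder (the trace only shows an empty value), and this is a generic sketch rather than the repository's actual fix:

    # [ "$flag" -eq 1 ] errors out when $flag is empty; test for emptiness first.
    if [[ -n "${flag:-}" && "$flag" -eq 1 ]]; then
        echo "optional branch taken"
    fi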
11:32:54.934533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.782 [2024-10-27 11:32:54.934540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:18:09.782 [2024-10-27 11:32:54.934549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.934615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.934624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.782 [2024-10-27 11:32:54.934631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:09.782 [2024-10-27 11:32:54.934643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.934677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.934686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:09.782 [2024-10-27 11:32:54.934692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:09.782 [2024-10-27 11:32:54.934699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.934717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.782 [2024-10-27 11:32:54.937575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.937601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.782 [2024-10-27 11:32:54.937610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:18:09.782 [2024-10-27 11:32:54.937617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.937644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.937651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:09.782 [2024-10-27 11:32:54.937659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:09.782 [2024-10-27 11:32:54.937664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.937682] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:09.782 [2024-10-27 11:32:54.937787] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:09.782 [2024-10-27 11:32:54.937800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:09.782 [2024-10-27 11:32:54.937808] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:09.782 [2024-10-27 11:32:54.937818] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:09.782 [2024-10-27 11:32:54.937824] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:09.782 [2024-10-27 11:32:54.937832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:09.782 [2024-10-27 11:32:54.937838] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:09.782 [2024-10-27 11:32:54.937845] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:09.782 [2024-10-27 11:32:54.937850] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:09.782 [2024-10-27 11:32:54.937859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.937864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:09.782 [2024-10-27 11:32:54.937872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:18:09.782 [2024-10-27 11:32:54.937883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.937947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.782 [2024-10-27 11:32:54.937953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:09.782 [2024-10-27 11:32:54.937960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:09.782 [2024-10-27 11:32:54.937965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.782 [2024-10-27 11:32:54.938039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:09.782 [2024-10-27 11:32:54.938048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:09.782 [2024-10-27 11:32:54.938055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:09.782 [2024-10-27 11:32:54.938073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:09.782 [2024-10-27 11:32:54.938091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.782 [2024-10-27 11:32:54.938102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:09.782 [2024-10-27 11:32:54.938107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:09.782 [2024-10-27 11:32:54.938113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.782 [2024-10-27 11:32:54.938118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:09.782 [2024-10-27 11:32:54.938125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:09.782 [2024-10-27 11:32:54.938130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:09.782 [2024-10-27 11:32:54.938143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:09.782 [2024-10-27 11:32:54.938162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:09.782 
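The layout dump is internally consistent: 20971520 L2P entries x 4 bytes per entry (the 'L2P address size' reported above) is 83886080 bytes, i.e. exactly the 80.00 MiB shown for the l2p region in the NV cache layout. As a quick check:

    # 20971520 entries * 4-byte addresses -> size of the l2p region in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # prints 80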
[2024-10-27 11:32:54.938178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:09.782 [2024-10-27 11:32:54.938196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:09.782 [2024-10-27 11:32:54.938211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:09.782 [2024-10-27 11:32:54.938230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.782 [2024-10-27 11:32:54.938240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:09.782 [2024-10-27 11:32:54.938245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:09.782 [2024-10-27 11:32:54.938251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.782 [2024-10-27 11:32:54.938256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:09.782 [2024-10-27 11:32:54.938262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:09.782 [2024-10-27 11:32:54.938267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:09.782 [2024-10-27 11:32:54.938277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:09.782 [2024-10-27 11:32:54.938283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938288] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:09.782 [2024-10-27 11:32:54.938304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:09.782 [2024-10-27 11:32:54.938310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.782 [2024-10-27 11:32:54.938326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:09.782 [2024-10-27 11:32:54.938334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:09.782 [2024-10-27 11:32:54.938340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:09.782 [2024-10-27 11:32:54.938346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:09.782 [2024-10-27 11:32:54.938351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:09.782 [2024-10-27 11:32:54.938358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:09.782 [2024-10-27 11:32:54.938365] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:09.782 [2024-10-27 
11:32:54.938374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.782 [2024-10-27 11:32:54.938380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:09.782 [2024-10-27 11:32:54.938387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:09.782 [2024-10-27 11:32:54.938393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:09.782 [2024-10-27 11:32:54.938399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:09.782 [2024-10-27 11:32:54.938405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:09.782 [2024-10-27 11:32:54.938412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:09.782 [2024-10-27 11:32:54.938417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:09.782 [2024-10-27 11:32:54.938423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:09.782 [2024-10-27 11:32:54.938429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:09.782 [2024-10-27 11:32:54.938436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:09.782 [2024-10-27 11:32:54.938442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:09.783 [2024-10-27 11:32:54.938449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:09.783 [2024-10-27 11:32:54.938454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:09.783 [2024-10-27 11:32:54.938461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:09.783 [2024-10-27 11:32:54.938467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:09.783 [2024-10-27 11:32:54.938475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.783 [2024-10-27 11:32:54.938483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:09.783 [2024-10-27 11:32:54.938490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:09.783 [2024-10-27 11:32:54.938496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:09.783 [2024-10-27 11:32:54.938502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:09.783 [2024-10-27 11:32:54.938508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.783 [2024-10-27 11:32:54.938515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:09.783 [2024-10-27 11:32:54.938520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:18:09.783 [2024-10-27 11:32:54.938527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.783 [2024-10-27 11:32:54.938555] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:09.783 [2024-10-27 11:32:54.938565] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:13.086 [2024-10-27 11:32:57.770996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.771070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:13.086 [2024-10-27 11:32:57.771085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2832.425 ms 00:18:13.086 [2024-10-27 11:32:57.771096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.086 [2024-10-27 11:32:57.799789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.799848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.086 [2024-10-27 11:32:57.799859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.463 ms 00:18:13.086 [2024-10-27 11:32:57.799869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.086 [2024-10-27 11:32:57.800001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.800014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.086 [2024-10-27 11:32:57.800023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:13.086 [2024-10-27 11:32:57.800036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.086 [2024-10-27 11:32:57.834374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.834426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.086 [2024-10-27 11:32:57.834438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.287 ms 00:18:13.086 [2024-10-27 11:32:57.834449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.086 [2024-10-27 11:32:57.834482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.834494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.086 [2024-10-27 11:32:57.834503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:13.086 [2024-10-27 11:32:57.834516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.086 [2024-10-27 11:32:57.835088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.086 [2024-10-27 11:32:57.835112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.087 [2024-10-27 11:32:57.835122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:13.087 [2024-10-27 11:32:57.835132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 
[2024-10-27 11:32:57.835242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.835254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.087 [2024-10-27 11:32:57.835263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:13.087 [2024-10-27 11:32:57.835277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:57.852406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.852453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.087 [2024-10-27 11:32:57.852465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.076 ms 00:18:13.087 [2024-10-27 11:32:57.852477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:57.865605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:13.087 [2024-10-27 11:32:57.869518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.869560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.087 [2024-10-27 11:32:57.869573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:18:13.087 [2024-10-27 11:32:57.869582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:57.973085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.973151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:13.087 [2024-10-27 11:32:57.973171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.466 ms 00:18:13.087 [2024-10-27 11:32:57.973182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:57.973422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.973435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.087 [2024-10-27 11:32:57.973451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:13.087 [2024-10-27 11:32:57.973463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:57.999206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:57.999256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:13.087 [2024-10-27 11:32:57.999273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.684 ms 00:18:13.087 [2024-10-27 11:32:57.999281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.024284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.024338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:13.087 [2024-10-27 11:32:58.024354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:18:13.087 [2024-10-27 11:32:58.024362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.025009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.025030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.087 
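The 'l2p maximum resident size is: 9 (of 10) MiB' line ties back to the bdev_ftl_create call earlier in the trace: --l2p_dram_limit 10 caps the in-DRAM portion of the 80 MiB L2P table at 10 MiB, of which about 9 MiB ends up usable as the resident cache; the remainder of the table appears to be paged in on demand from the persisted l2p region shown in the layout dump. Roughly:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80 MiB: full L2P table
    # resident cache reported above: 9 MiB, i.e. roughly 11% of the table kept in DRAM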
[2024-10-27 11:32:58.025060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:18:13.087 [2024-10-27 11:32:58.025068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.107862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.108058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:13.087 [2024-10-27 11:32:58.108089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.746 ms 00:18:13.087 [2024-10-27 11:32:58.108099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.142917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.143132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:13.087 [2024-10-27 11:32:58.143167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.319 ms 00:18:13.087 [2024-10-27 11:32:58.143177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.169156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.169208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:13.087 [2024-10-27 11:32:58.169224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.821 ms 00:18:13.087 [2024-10-27 11:32:58.169232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.195803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.195854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.087 [2024-10-27 11:32:58.195871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.513 ms 00:18:13.087 [2024-10-27 11:32:58.195879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.195938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.195949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.087 [2024-10-27 11:32:58.195965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:13.087 [2024-10-27 11:32:58.195973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.196068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.087 [2024-10-27 11:32:58.196079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.087 [2024-10-27 11:32:58.196090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:13.087 [2024-10-27 11:32:58.196098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.087 [2024-10-27 11:32:58.197358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3268.921 ms, result 0 00:18:13.087 { 00:18:13.087 "name": "ftl0", 00:18:13.087 "uuid": "8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b" 00:18:13.087 } 00:18:13.087 11:32:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:13.087 11:32:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:13.350 11:32:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:13.350 11:32:58 ftl.ftl_restore -- 
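The three commands at restore.sh lines 61-63 assemble a complete SPDK JSON configuration for a later reload: save_subsystem_config -n bdev emits only the bdev subsystem object, and the surrounding echoes wrap it in the top-level {"subsystems": [...]} envelope that spdk_tgt accepts as a JSON config. Where the combined output is redirected is not visible in this excerpt; the shape of the construct is simply:

    # Wrap the bdev subsystem dump in a full config envelope (redirect target omitted).
    {
        echo '{"subsystems": ['
        "$rpc_py" save_subsystem_config -n bdev
        echo ']}'
    }   # > some config file in the actual test; the destination is not shown here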
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:13.612 [2024-10-27 11:32:58.648703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.612 [2024-10-27 11:32:58.648773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.612 [2024-10-27 11:32:58.648790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.612 [2024-10-27 11:32:58.648810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.612 [2024-10-27 11:32:58.648836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.612 [2024-10-27 11:32:58.651901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.612 [2024-10-27 11:32:58.652111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.612 [2024-10-27 11:32:58.652140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:18:13.613 [2024-10-27 11:32:58.652149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.652479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.652493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.613 [2024-10-27 11:32:58.652505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:13.613 [2024-10-27 11:32:58.652518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.655777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.655799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.613 [2024-10-27 11:32:58.655812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:18:13.613 [2024-10-27 11:32:58.655820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.662275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.662334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.613 [2024-10-27 11:32:58.662349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.430 ms 00:18:13.613 [2024-10-27 11:32:58.662357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.689486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.689667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.613 [2024-10-27 11:32:58.689694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.040 ms 00:18:13.613 [2024-10-27 11:32:58.689703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.707246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.707319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.613 [2024-10-27 11:32:58.707337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.397 ms 00:18:13.613 [2024-10-27 11:32:58.707346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.707526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.707540] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.613 [2024-10-27 11:32:58.707552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:13.613 [2024-10-27 11:32:58.707560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.733573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.733621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:13.613 [2024-10-27 11:32:58.733636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.988 ms 00:18:13.613 [2024-10-27 11:32:58.733643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.758880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.758928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:13.613 [2024-10-27 11:32:58.758943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.179 ms 00:18:13.613 [2024-10-27 11:32:58.758950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.784019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.784067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.613 [2024-10-27 11:32:58.784082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.010 ms 00:18:13.613 [2024-10-27 11:32:58.784090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.809280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.613 [2024-10-27 11:32:58.809338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.613 [2024-10-27 11:32:58.809353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.088 ms 00:18:13.613 [2024-10-27 11:32:58.809361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.613 [2024-10-27 11:32:58.809413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.613 [2024-10-27 11:32:58.809429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809519] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 
[2024-10-27 11:32:58.809741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.613 [2024-10-27 11:32:58.809961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:13.613 [2024-10-27 11:32:58.809970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.809977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.809987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.809996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.614 [2024-10-27 11:32:58.810367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.614 [2024-10-27 11:32:58.810377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:18:13.614 [2024-10-27 11:32:58.810386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.614 [2024-10-27 11:32:58.810401] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:13.614 [2024-10-27 11:32:58.810409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.614 [2024-10-27 11:32:58.810418] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.614 [2024-10-27 11:32:58.810429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.614 [2024-10-27 11:32:58.810439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.614 [2024-10-27 11:32:58.810447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.614 [2024-10-27 11:32:58.810455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.614 [2024-10-27 11:32:58.810462] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:13.614 [2024-10-27 11:32:58.810472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.614 [2024-10-27 11:32:58.810481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.614 [2024-10-27 11:32:58.810492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:18:13.614 [2024-10-27 11:32:58.810500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.824173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.614 [2024-10-27 11:32:58.824385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.614 [2024-10-27 11:32:58.824411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.615 ms 00:18:13.614 [2024-10-27 11:32:58.824419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.824832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.614 [2024-10-27 11:32:58.824843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.614 [2024-10-27 11:32:58.824855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:13.614 [2024-10-27 11:32:58.824863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.871593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.614 [2024-10-27 11:32:58.871642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.614 [2024-10-27 11:32:58.871657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.614 [2024-10-27 11:32:58.871666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.871742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.614 [2024-10-27 11:32:58.871750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.614 [2024-10-27 11:32:58.871761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.614 [2024-10-27 11:32:58.871770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.871859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.614 [2024-10-27 11:32:58.871869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.614 [2024-10-27 11:32:58.871880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.614 [2024-10-27 11:32:58.871888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.614 [2024-10-27 11:32:58.871911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.614 [2024-10-27 11:32:58.871920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.614 [2024-10-27 11:32:58.871930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.614 [2024-10-27 11:32:58.871937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:58.957410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:58.957467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.876 [2024-10-27 11:32:58.957482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:13.876 [2024-10-27 11:32:58.957491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.876 [2024-10-27 11:32:59.016269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.876 [2024-10-27 11:32:59.016395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.876 [2024-10-27 11:32:59.016459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.876 [2024-10-27 11:32:59.016571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.876 [2024-10-27 11:32:59.016633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.876 [2024-10-27 11:32:59.016692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.876 [2024-10-27 11:32:59.016750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.876 [2024-10-27 11:32:59.016759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.876 [2024-10-27 11:32:59.016765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.876 [2024-10-27 11:32:59.016884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.156 ms, result 0 00:18:13.876 true 00:18:13.876 11:32:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74358 
00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74358 ']' 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74358 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74358 00:18:13.876 killing process with pid 74358 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74358' 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74358 00:18:13.876 11:32:59 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74358 00:18:20.463 11:33:05 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:24.668 262144+0 records in 00:18:24.668 262144+0 records out 00:18:24.668 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85039 s, 279 MB/s 00:18:24.668 11:33:09 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:26.053 11:33:11 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:26.054 [2024-10-27 11:33:11.317328] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:18:26.054 [2024-10-27 11:33:11.317899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:18:26.315 [2024-10-27 11:33:11.463733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.315 [2024-10-27 11:33:11.544658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.576 [2024-10-27 11:33:11.749779] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:26.576 [2024-10-27 11:33:11.749827] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:26.838 [2024-10-27 11:33:11.901051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.901089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:26.838 [2024-10-27 11:33:11.901101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.838 [2024-10-27 11:33:11.901107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.901139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.901148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.838 [2024-10-27 11:33:11.901156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:26.838 [2024-10-27 11:33:11.901161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.901173] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:26.838 [2024-10-27 11:33:11.901689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:26.838 [2024-10-27 11:33:11.901703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.901709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.838 [2024-10-27 11:33:11.901718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:18:26.838 [2024-10-27 11:33:11.901723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.902947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:26.838 [2024-10-27 11:33:11.912529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.912558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:26.838 [2024-10-27 11:33:11.912568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.583 ms 00:18:26.838 [2024-10-27 11:33:11.912574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.912623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.912633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:26.838 [2024-10-27 11:33:11.912639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:26.838 [2024-10-27 11:33:11.912645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.917138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.917164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.838 [2024-10-27 11:33:11.917171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:18:26.838 [2024-10-27 11:33:11.917178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.838 [2024-10-27 11:33:11.917235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.838 [2024-10-27 11:33:11.917242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.838 [2024-10-27 11:33:11.917248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:26.838 [2024-10-27 11:33:11.917254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.917286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.839 [2024-10-27 11:33:11.917307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:26.839 [2024-10-27 11:33:11.917314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:26.839 [2024-10-27 11:33:11.917319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.917333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:26.839 [2024-10-27 11:33:11.919907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.839 [2024-10-27 11:33:11.920027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.839 [2024-10-27 11:33:11.920040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:18:26.839 [2024-10-27 11:33:11.920049] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.920076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.839 [2024-10-27 11:33:11.920083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:26.839 [2024-10-27 11:33:11.920089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:26.839 [2024-10-27 11:33:11.920094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.920107] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:26.839 [2024-10-27 11:33:11.920122] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:26.839 [2024-10-27 11:33:11.920148] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:26.839 [2024-10-27 11:33:11.920161] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:26.839 [2024-10-27 11:33:11.920239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:26.839 [2024-10-27 11:33:11.920247] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:26.839 [2024-10-27 11:33:11.920255] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:26.839 [2024-10-27 11:33:11.920263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920270] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920276] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:26.839 [2024-10-27 11:33:11.920281] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:26.839 [2024-10-27 11:33:11.920287] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:26.839 [2024-10-27 11:33:11.920292] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:26.839 [2024-10-27 11:33:11.920314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.839 [2024-10-27 11:33:11.920320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:26.839 [2024-10-27 11:33:11.920326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:18:26.839 [2024-10-27 11:33:11.920332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.920395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.839 [2024-10-27 11:33:11.920401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:26.839 [2024-10-27 11:33:11.920406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:26.839 [2024-10-27 11:33:11.920412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.839 [2024-10-27 11:33:11.920487] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:26.839 [2024-10-27 11:33:11.920496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:26.839 [2024-10-27 11:33:11.920502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:26.839 [2024-10-27 11:33:11.920508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:26.839 [2024-10-27 11:33:11.920519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:26.839 [2024-10-27 11:33:11.920535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.839 [2024-10-27 11:33:11.920546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:26.839 [2024-10-27 11:33:11.920551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:26.839 [2024-10-27 11:33:11.920556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.839 [2024-10-27 11:33:11.920561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:26.839 [2024-10-27 11:33:11.920567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:26.839 [2024-10-27 11:33:11.920576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:26.839 [2024-10-27 11:33:11.920587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:26.839 [2024-10-27 11:33:11.920601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:26.839 [2024-10-27 11:33:11.920624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:26.839 [2024-10-27 11:33:11.920639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:26.839 [2024-10-27 11:33:11.920654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:26.839 [2024-10-27 11:33:11.920669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.839 [2024-10-27 11:33:11.920679] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:26.839 [2024-10-27 11:33:11.920684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:26.839 [2024-10-27 11:33:11.920689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.839 [2024-10-27 11:33:11.920694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:26.839 [2024-10-27 11:33:11.920700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:26.839 [2024-10-27 11:33:11.920705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:26.839 [2024-10-27 11:33:11.920715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:26.839 [2024-10-27 11:33:11.920720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:26.839 [2024-10-27 11:33:11.920731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:26.839 [2024-10-27 11:33:11.920736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.839 [2024-10-27 11:33:11.920749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:26.839 [2024-10-27 11:33:11.920754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:26.839 [2024-10-27 11:33:11.920759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:26.839 [2024-10-27 11:33:11.920765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:26.839 [2024-10-27 11:33:11.920770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:26.839 [2024-10-27 11:33:11.920775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:26.839 [2024-10-27 11:33:11.920781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:26.839 [2024-10-27 11:33:11.920788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.839 [2024-10-27 11:33:11.920794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:26.839 [2024-10-27 11:33:11.920800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:26.839 [2024-10-27 11:33:11.920805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:26.839 [2024-10-27 11:33:11.920810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:26.839 [2024-10-27 11:33:11.920816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:26.839 [2024-10-27 11:33:11.920821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:26.839 [2024-10-27 11:33:11.920827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:26.839 [2024-10-27 11:33:11.920832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:26.839 [2024-10-27 11:33:11.920838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:26.839 [2024-10-27 11:33:11.920843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:26.839 [2024-10-27 11:33:11.920848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:26.839 [2024-10-27 11:33:11.920854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:26.839 [2024-10-27 11:33:11.920859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:26.839 [2024-10-27 11:33:11.920864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:26.839 [2024-10-27 11:33:11.920869] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:26.839 [2024-10-27 11:33:11.920875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.840 [2024-10-27 11:33:11.920884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:26.840 [2024-10-27 11:33:11.920889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:26.840 [2024-10-27 11:33:11.920895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:26.840 [2024-10-27 11:33:11.920900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:26.840 [2024-10-27 11:33:11.920905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.920910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:26.840 [2024-10-27 11:33:11.920916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:18:26.840 [2024-10-27 11:33:11.920922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.941889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.941918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.840 [2024-10-27 11:33:11.941926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.935 ms 00:18:26.840 [2024-10-27 11:33:11.941932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.941997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.942006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:26.840 [2024-10-27 11:33:11.942012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.048 ms 00:18:26.840 [2024-10-27 11:33:11.942018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.982023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.982054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.840 [2024-10-27 11:33:11.982064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.965 ms 00:18:26.840 [2024-10-27 11:33:11.982071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.982101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.982107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.840 [2024-10-27 11:33:11.982114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:26.840 [2024-10-27 11:33:11.982122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.982458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.982476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.840 [2024-10-27 11:33:11.982483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:26.840 [2024-10-27 11:33:11.982489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.982585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.982595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.840 [2024-10-27 11:33:11.982602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:26.840 [2024-10-27 11:33:11.982608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:11.993122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:11.993148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.840 [2024-10-27 11:33:11.993156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.496 ms 00:18:26.840 [2024-10-27 11:33:11.993162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.002985] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:26.840 [2024-10-27 11:33:12.003016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:26.840 [2024-10-27 11:33:12.003025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.003032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:26.840 [2024-10-27 11:33:12.003039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.781 ms 00:18:26.840 [2024-10-27 11:33:12.003044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.021236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.021361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:26.840 [2024-10-27 11:33:12.021379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.161 ms 00:18:26.840 [2024-10-27 11:33:12.021385] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.030475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.030507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:26.840 [2024-10-27 11:33:12.030515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.064 ms 00:18:26.840 [2024-10-27 11:33:12.030520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.039146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.039170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:26.840 [2024-10-27 11:33:12.039178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.599 ms 00:18:26.840 [2024-10-27 11:33:12.039183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.039642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.039658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:26.840 [2024-10-27 11:33:12.039666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:26.840 [2024-10-27 11:33:12.039672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.082841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.082880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:26.840 [2024-10-27 11:33:12.082890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.156 ms 00:18:26.840 [2024-10-27 11:33:12.082897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.090641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:26.840 [2024-10-27 11:33:12.092332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.092445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:26.840 [2024-10-27 11:33:12.092458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.397 ms 00:18:26.840 [2024-10-27 11:33:12.092465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.092518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.092526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:26.840 [2024-10-27 11:33:12.092533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:26.840 [2024-10-27 11:33:12.092539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.092580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.092590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:26.840 [2024-10-27 11:33:12.092596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:26.840 [2024-10-27 11:33:12.092602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.092623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.092630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller 00:18:26.840 [2024-10-27 11:33:12.092636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:26.840 [2024-10-27 11:33:12.092641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.092665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:26.840 [2024-10-27 11:33:12.092672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.092679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:26.840 [2024-10-27 11:33:12.092687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:26.840 [2024-10-27 11:33:12.092692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.110462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.110491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:26.840 [2024-10-27 11:33:12.110500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.757 ms 00:18:26.840 [2024-10-27 11:33:12.110507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.110563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.840 [2024-10-27 11:33:12.110570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:26.840 [2024-10-27 11:33:12.110577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:26.840 [2024-10-27 11:33:12.110583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.840 [2024-10-27 11:33:12.111309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 209.922 ms, result 0 00:18:28.222  [2024-10-27T11:33:14.446Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-27T11:33:15.390Z] Copying: 48/1024 [MB] (22 MBps) [2024-10-27T11:33:16.337Z] Copying: 66/1024 [MB] (17 MBps) [2024-10-27T11:33:17.278Z] Copying: 86/1024 [MB] (20 MBps) [2024-10-27T11:33:18.222Z] Copying: 123/1024 [MB] (36 MBps) [2024-10-27T11:33:19.162Z] Copying: 137/1024 [MB] (13 MBps) [2024-10-27T11:33:20.548Z] Copying: 152/1024 [MB] (15 MBps) [2024-10-27T11:33:21.491Z] Copying: 172/1024 [MB] (20 MBps) [2024-10-27T11:33:22.433Z] Copying: 186/1024 [MB] (14 MBps) [2024-10-27T11:33:23.376Z] Copying: 202/1024 [MB] (15 MBps) [2024-10-27T11:33:24.317Z] Copying: 222/1024 [MB] (20 MBps) [2024-10-27T11:33:25.387Z] Copying: 242/1024 [MB] (19 MBps) [2024-10-27T11:33:26.330Z] Copying: 265/1024 [MB] (22 MBps) [2024-10-27T11:33:27.273Z] Copying: 284/1024 [MB] (19 MBps) [2024-10-27T11:33:28.216Z] Copying: 303/1024 [MB] (18 MBps) [2024-10-27T11:33:29.162Z] Copying: 318/1024 [MB] (15 MBps) [2024-10-27T11:33:30.564Z] Copying: 341/1024 [MB] (22 MBps) [2024-10-27T11:33:31.138Z] Copying: 361/1024 [MB] (20 MBps) [2024-10-27T11:33:32.523Z] Copying: 373/1024 [MB] (11 MBps) [2024-10-27T11:33:33.466Z] Copying: 384/1024 [MB] (10 MBps) [2024-10-27T11:33:34.410Z] Copying: 394/1024 [MB] (10 MBps) [2024-10-27T11:33:35.353Z] Copying: 405/1024 [MB] (10 MBps) [2024-10-27T11:33:36.297Z] Copying: 417/1024 [MB] (12 MBps) [2024-10-27T11:33:37.242Z] Copying: 434/1024 [MB] (16 MBps) [2024-10-27T11:33:38.186Z] Copying: 449/1024 [MB] (15 MBps) [2024-10-27T11:33:39.130Z] Copying: 461/1024 [MB] (11 MBps) [2024-10-27T11:33:40.517Z] Copying: 472/1024 [MB] (10 MBps) 
[2024-10-27T11:33:41.461Z] Copying: 522/1024 [MB] (50 MBps) [2024-10-27T11:33:42.404Z] Copying: 549/1024 [MB] (26 MBps) [2024-10-27T11:33:43.348Z] Copying: 575/1024 [MB] (25 MBps) [2024-10-27T11:33:44.291Z] Copying: 586/1024 [MB] (10 MBps) [2024-10-27T11:33:45.235Z] Copying: 596/1024 [MB] (10 MBps) [2024-10-27T11:33:46.178Z] Copying: 607/1024 [MB] (11 MBps) [2024-10-27T11:33:47.565Z] Copying: 632/1024 [MB] (24 MBps) [2024-10-27T11:33:48.138Z] Copying: 650/1024 [MB] (18 MBps) [2024-10-27T11:33:49.525Z] Copying: 668/1024 [MB] (17 MBps) [2024-10-27T11:33:50.466Z] Copying: 679/1024 [MB] (11 MBps) [2024-10-27T11:33:51.410Z] Copying: 699/1024 [MB] (20 MBps) [2024-10-27T11:33:52.354Z] Copying: 719/1024 [MB] (20 MBps) [2024-10-27T11:33:53.298Z] Copying: 736/1024 [MB] (17 MBps) [2024-10-27T11:33:54.242Z] Copying: 755/1024 [MB] (19 MBps) [2024-10-27T11:33:55.185Z] Copying: 767/1024 [MB] (11 MBps) [2024-10-27T11:33:56.176Z] Copying: 777/1024 [MB] (10 MBps) [2024-10-27T11:33:57.129Z] Copying: 788/1024 [MB] (10 MBps) [2024-10-27T11:33:58.515Z] Copying: 801/1024 [MB] (13 MBps) [2024-10-27T11:33:59.459Z] Copying: 824/1024 [MB] (23 MBps) [2024-10-27T11:34:00.402Z] Copying: 842/1024 [MB] (17 MBps) [2024-10-27T11:34:01.349Z] Copying: 861/1024 [MB] (18 MBps) [2024-10-27T11:34:02.294Z] Copying: 879/1024 [MB] (18 MBps) [2024-10-27T11:34:03.237Z] Copying: 896/1024 [MB] (16 MBps) [2024-10-27T11:34:04.175Z] Copying: 913/1024 [MB] (17 MBps) [2024-10-27T11:34:05.557Z] Copying: 923/1024 [MB] (10 MBps) [2024-10-27T11:34:06.130Z] Copying: 934/1024 [MB] (10 MBps) [2024-10-27T11:34:07.518Z] Copying: 944/1024 [MB] (10 MBps) [2024-10-27T11:34:08.462Z] Copying: 955/1024 [MB] (10 MBps) [2024-10-27T11:34:09.405Z] Copying: 965/1024 [MB] (10 MBps) [2024-10-27T11:34:10.348Z] Copying: 975/1024 [MB] (10 MBps) [2024-10-27T11:34:11.291Z] Copying: 985/1024 [MB] (10 MBps) [2024-10-27T11:34:12.236Z] Copying: 995/1024 [MB] (10 MBps) [2024-10-27T11:34:12.498Z] Copying: 1005/1024 [MB] (10 MBps) [2024-10-27T11:34:12.498Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-27 11:34:12.476793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.217 [2024-10-27 11:34:12.476832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.217 [2024-10-27 11:34:12.476845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.217 [2024-10-27 11:34:12.476853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.217 [2024-10-27 11:34:12.476872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.217 [2024-10-27 11:34:12.479449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.217 [2024-10-27 11:34:12.479477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.217 [2024-10-27 11:34:12.479488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:19:27.217 [2024-10-27 11:34:12.479497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.217 [2024-10-27 11:34:12.480820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.217 [2024-10-27 11:34:12.480851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.217 [2024-10-27 11:34:12.480860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:19:27.217 [2024-10-27 11:34:12.480867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.479 
[2024-10-27 11:34:12.497390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.479 [2024-10-27 11:34:12.497430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:27.479 [2024-10-27 11:34:12.497440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.501 ms 00:19:27.479 [2024-10-27 11:34:12.497448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.479 [2024-10-27 11:34:12.503533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.479 [2024-10-27 11:34:12.503563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.479 [2024-10-27 11:34:12.503573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.059 ms 00:19:27.479 [2024-10-27 11:34:12.503580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.479 [2024-10-27 11:34:12.527680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.479 [2024-10-27 11:34:12.527712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.479 [2024-10-27 11:34:12.527723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.043 ms 00:19:27.479 [2024-10-27 11:34:12.527730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.479 [2024-10-27 11:34:12.541457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.479 [2024-10-27 11:34:12.541487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.479 [2024-10-27 11:34:12.541498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.696 ms 00:19:27.479 [2024-10-27 11:34:12.541506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.479 [2024-10-27 11:34:12.541624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.479 [2024-10-27 11:34:12.541634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.479 [2024-10-27 11:34:12.541643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:27.479 [2024-10-27 11:34:12.541654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.480 [2024-10-27 11:34:12.565086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.480 [2024-10-27 11:34:12.565116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.480 [2024-10-27 11:34:12.565126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.419 ms 00:19:27.480 [2024-10-27 11:34:12.565133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.480 [2024-10-27 11:34:12.588323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.480 [2024-10-27 11:34:12.588354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.480 [2024-10-27 11:34:12.588372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.160 ms 00:19:27.480 [2024-10-27 11:34:12.588378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.480 [2024-10-27 11:34:12.611171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.480 [2024-10-27 11:34:12.611327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.480 [2024-10-27 11:34:12.611344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.761 ms 00:19:27.480 [2024-10-27 11:34:12.611351] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.480 [2024-10-27 11:34:12.634776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.480 [2024-10-27 11:34:12.634921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.480 [2024-10-27 11:34:12.634939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.136 ms 00:19:27.480 [2024-10-27 11:34:12.634947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.480 [2024-10-27 11:34:12.635199] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.480 [2024-10-27 11:34:12.635236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635412] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 
11:34:12.635603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.480 [2024-10-27 11:34:12.635708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:19:27.481 [2024-10-27 11:34:12.635782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.635988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.481 [2024-10-27 11:34:12.636003] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.481 [2024-10-27 11:34:12.636017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:19:27.481 [2024-10-27 11:34:12.636025] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.481 [2024-10-27 11:34:12.636034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.481 [2024-10-27 11:34:12.636041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.481 [2024-10-27 11:34:12.636048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.481 [2024-10-27 11:34:12.636055] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.481 [2024-10-27 11:34:12.636063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:27.481 [2024-10-27 11:34:12.636070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.481 [2024-10-27 11:34:12.636082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.481 [2024-10-27 11:34:12.636088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.481 [2024-10-27 11:34:12.636096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.481 [2024-10-27 11:34:12.636104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.481 [2024-10-27 11:34:12.636113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:19:27.481 [2024-10-27 11:34:12.636120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.648747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.481 [2024-10-27 11:34:12.648780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.481 [2024-10-27 11:34:12.648790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.607 ms 00:19:27.481 [2024-10-27 11:34:12.648797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.649158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.481 [2024-10-27 11:34:12.649167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.481 [2024-10-27 11:34:12.649175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:27.481 [2024-10-27 11:34:12.649182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.683208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.481 [2024-10-27 11:34:12.683244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:19:27.481 [2024-10-27 11:34:12.683254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.481 [2024-10-27 11:34:12.683262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.683334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.481 [2024-10-27 11:34:12.683344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.481 [2024-10-27 11:34:12.683352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.481 [2024-10-27 11:34:12.683359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.683416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.481 [2024-10-27 11:34:12.683425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.481 [2024-10-27 11:34:12.683435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.481 [2024-10-27 11:34:12.683442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.481 [2024-10-27 11:34:12.683457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.481 [2024-10-27 11:34:12.683465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.481 [2024-10-27 11:34:12.683472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.481 [2024-10-27 11:34:12.683479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.764540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.764809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.744 [2024-10-27 11:34:12.764830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.764839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.744 [2024-10-27 11:34:12.834245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.744 [2024-10-27 11:34:12.834395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.744 [2024-10-27 11:34:12.834460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 
[2024-10-27 11:34:12.834579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.744 [2024-10-27 11:34:12.834591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.744 [2024-10-27 11:34:12.834650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.744 [2024-10-27 11:34:12.834721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.744 [2024-10-27 11:34:12.834787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.744 [2024-10-27 11:34:12.834796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.744 [2024-10-27 11:34:12.834803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.744 [2024-10-27 11:34:12.834936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 358.101 ms, result 0 00:19:28.317 00:19:28.317 00:19:28.578 11:34:13 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:28.578 [2024-10-27 11:34:13.670487] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:19:28.578 [2024-10-27 11:34:13.670635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75227 ] 00:19:28.578 [2024-10-27 11:34:13.824394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.839 [2024-10-27 11:34:13.939827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.100 [2024-10-27 11:34:14.229763] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.100 [2024-10-27 11:34:14.229831] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:29.363 [2024-10-27 11:34:14.390403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.390461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:29.363 [2024-10-27 11:34:14.390480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:29.363 [2024-10-27 11:34:14.390488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.390539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.390550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.363 [2024-10-27 11:34:14.390562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:29.363 [2024-10-27 11:34:14.390570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.390591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:29.363 [2024-10-27 11:34:14.391344] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:29.363 [2024-10-27 11:34:14.391366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.391375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.363 [2024-10-27 11:34:14.391384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:19:29.363 [2024-10-27 11:34:14.391392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.393137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:29.363 [2024-10-27 11:34:14.407212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.407262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:29.363 [2024-10-27 11:34:14.407276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.076 ms 00:19:29.363 [2024-10-27 11:34:14.407284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.407376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.407390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:29.363 [2024-10-27 11:34:14.407400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:29.363 [2024-10-27 11:34:14.407407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.415249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:29.363 [2024-10-27 11:34:14.415291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.363 [2024-10-27 11:34:14.415321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.764 ms 00:19:29.363 [2024-10-27 11:34:14.415330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.415414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.415424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.363 [2024-10-27 11:34:14.415432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:29.363 [2024-10-27 11:34:14.415441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.363 [2024-10-27 11:34:14.415483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.363 [2024-10-27 11:34:14.415493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:29.364 [2024-10-27 11:34:14.415502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:29.364 [2024-10-27 11:34:14.415510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.364 [2024-10-27 11:34:14.415533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.364 [2024-10-27 11:34:14.419712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.364 [2024-10-27 11:34:14.419751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.364 [2024-10-27 11:34:14.419761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:19:29.364 [2024-10-27 11:34:14.419773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.364 [2024-10-27 11:34:14.419807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.364 [2024-10-27 11:34:14.419816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:29.364 [2024-10-27 11:34:14.419825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:29.364 [2024-10-27 11:34:14.419833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.364 [2024-10-27 11:34:14.419883] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:29.364 [2024-10-27 11:34:14.419905] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:29.364 [2024-10-27 11:34:14.419942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:29.364 [2024-10-27 11:34:14.419962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:29.364 [2024-10-27 11:34:14.420067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:29.364 [2024-10-27 11:34:14.420079] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:29.364 [2024-10-27 11:34:14.420090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:29.364 [2024-10-27 11:34:14.420101] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420111] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420119] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:29.364 [2024-10-27 11:34:14.420127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:29.364 [2024-10-27 11:34:14.420135] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:29.364 [2024-10-27 11:34:14.420143] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:29.364 [2024-10-27 11:34:14.420154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.364 [2024-10-27 11:34:14.420162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:29.364 [2024-10-27 11:34:14.420170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:29.364 [2024-10-27 11:34:14.420178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.364 [2024-10-27 11:34:14.420260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.364 [2024-10-27 11:34:14.420269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:29.364 [2024-10-27 11:34:14.420276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:29.364 [2024-10-27 11:34:14.420284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.364 [2024-10-27 11:34:14.420424] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:29.364 [2024-10-27 11:34:14.420439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:29.364 [2024-10-27 11:34:14.420449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:29.364 [2024-10-27 11:34:14.420472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:29.364 [2024-10-27 11:34:14.420493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.364 [2024-10-27 11:34:14.420508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:29.364 [2024-10-27 11:34:14.420515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:29.364 [2024-10-27 11:34:14.420521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:29.364 [2024-10-27 11:34:14.420528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:29.364 [2024-10-27 11:34:14.420538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:29.364 [2024-10-27 11:34:14.420551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:29.364 [2024-10-27 11:34:14.420565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420571] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:29.364 [2024-10-27 11:34:14.420585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:29.364 [2024-10-27 11:34:14.420607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:29.364 [2024-10-27 11:34:14.420626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:29.364 [2024-10-27 11:34:14.420646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:29.364 [2024-10-27 11:34:14.420681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.364 [2024-10-27 11:34:14.420694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:29.364 [2024-10-27 11:34:14.420701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:29.364 [2024-10-27 11:34:14.420708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:29.364 [2024-10-27 11:34:14.420715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:29.364 [2024-10-27 11:34:14.420723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:29.364 [2024-10-27 11:34:14.420729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:29.364 [2024-10-27 11:34:14.420742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:29.364 [2024-10-27 11:34:14.420749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420755] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:29.364 [2024-10-27 11:34:14.420764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:29.364 [2024-10-27 11:34:14.420772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:29.364 [2024-10-27 11:34:14.420789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:29.364 [2024-10-27 11:34:14.420797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:29.364 [2024-10-27 11:34:14.420804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:29.364 
[2024-10-27 11:34:14.420813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:29.364 [2024-10-27 11:34:14.420820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:29.364 [2024-10-27 11:34:14.420827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:29.364 [2024-10-27 11:34:14.420836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:29.364 [2024-10-27 11:34:14.420845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:29.365 [2024-10-27 11:34:14.420862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:29.365 [2024-10-27 11:34:14.420869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:29.365 [2024-10-27 11:34:14.420877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:29.365 [2024-10-27 11:34:14.420884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:29.365 [2024-10-27 11:34:14.420891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:29.365 [2024-10-27 11:34:14.420898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:29.365 [2024-10-27 11:34:14.420905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:29.365 [2024-10-27 11:34:14.420912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:29.365 [2024-10-27 11:34:14.420920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:29.365 [2024-10-27 11:34:14.420956] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:29.365 [2024-10-27 11:34:14.420964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:29.365 [2024-10-27 11:34:14.420982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:29.365 [2024-10-27 11:34:14.420989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:29.365 [2024-10-27 11:34:14.420996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:29.365 [2024-10-27 11:34:14.421003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.421012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:29.365 [2024-10-27 11:34:14.421020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:19:29.365 [2024-10-27 11:34:14.421033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.452404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.452449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.365 [2024-10-27 11:34:14.452460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.326 ms 00:19:29.365 [2024-10-27 11:34:14.452468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.452556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.452570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:29.365 [2024-10-27 11:34:14.452579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:29.365 [2024-10-27 11:34:14.452587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.496613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.496685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.365 [2024-10-27 11:34:14.496700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.970 ms 00:19:29.365 [2024-10-27 11:34:14.496709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.496758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.496769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.365 [2024-10-27 11:34:14.496778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:29.365 [2024-10-27 11:34:14.496790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.497408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.497438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.365 [2024-10-27 11:34:14.497449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:19:29.365 [2024-10-27 11:34:14.497457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.497615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.497626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.365 [2024-10-27 11:34:14.497634] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:29.365 [2024-10-27 11:34:14.497642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.512972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.515309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.365 [2024-10-27 11:34:14.515335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.304 ms 00:19:29.365 [2024-10-27 11:34:14.515352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.529615] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:29.365 [2024-10-27 11:34:14.529664] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:29.365 [2024-10-27 11:34:14.529677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.529687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:29.365 [2024-10-27 11:34:14.529696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.209 ms 00:19:29.365 [2024-10-27 11:34:14.529703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.555373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.555427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:29.365 [2024-10-27 11:34:14.555440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.618 ms 00:19:29.365 [2024-10-27 11:34:14.555448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.568337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.568376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:29.365 [2024-10-27 11:34:14.568388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.836 ms 00:19:29.365 [2024-10-27 11:34:14.568395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.581127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.581168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:29.365 [2024-10-27 11:34:14.581180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.688 ms 00:19:29.365 [2024-10-27 11:34:14.581187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.365 [2024-10-27 11:34:14.581876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.365 [2024-10-27 11:34:14.581906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:29.365 [2024-10-27 11:34:14.581917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:19:29.365 [2024-10-27 11:34:14.581926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.646024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.646082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:29.627 [2024-10-27 11:34:14.646097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.075 ms 00:19:29.627 [2024-10-27 11:34:14.646113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.657104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:29.627 [2024-10-27 11:34:14.660093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.660136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:29.627 [2024-10-27 11:34:14.660148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.923 ms 00:19:29.627 [2024-10-27 11:34:14.660157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.660238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.660250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:29.627 [2024-10-27 11:34:14.660260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:29.627 [2024-10-27 11:34:14.660268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.660382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.660394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:29.627 [2024-10-27 11:34:14.660402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:29.627 [2024-10-27 11:34:14.660411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.660436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.660445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:29.627 [2024-10-27 11:34:14.660453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:29.627 [2024-10-27 11:34:14.660461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.660496] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:29.627 [2024-10-27 11:34:14.660510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.627 [2024-10-27 11:34:14.660519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:29.627 [2024-10-27 11:34:14.660527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:29.627 [2024-10-27 11:34:14.660536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.627 [2024-10-27 11:34:14.685721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.628 [2024-10-27 11:34:14.685767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:29.628 [2024-10-27 11:34:14.685782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.162 ms 00:19:29.628 [2024-10-27 11:34:14.685790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.628 [2024-10-27 11:34:14.685883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.628 [2024-10-27 11:34:14.685893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:29.628 [2024-10-27 11:34:14.685903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:29.628 [2024-10-27 11:34:14.685912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:29.628 [2024-10-27 11:34:14.687144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.271 ms, result 0 00:19:31.009  [2024-10-27T11:34:17.231Z] Copying: 11/1024 [MB] (11 MBps) [2024-10-27T11:34:18.176Z] Copying: 22/1024 [MB] (10 MBps) [2024-10-27T11:34:19.120Z] Copying: 34/1024 [MB] (11 MBps) [2024-10-27T11:34:20.064Z] Copying: 55/1024 [MB] (21 MBps) [2024-10-27T11:34:21.037Z] Copying: 70/1024 [MB] (14 MBps) [2024-10-27T11:34:21.980Z] Copying: 81/1024 [MB] (11 MBps) [2024-10-27T11:34:22.924Z] Copying: 106/1024 [MB] (24 MBps) [2024-10-27T11:34:24.312Z] Copying: 122/1024 [MB] (15 MBps) [2024-10-27T11:34:24.887Z] Copying: 143/1024 [MB] (21 MBps) [2024-10-27T11:34:26.273Z] Copying: 158/1024 [MB] (14 MBps) [2024-10-27T11:34:27.260Z] Copying: 173/1024 [MB] (15 MBps) [2024-10-27T11:34:28.227Z] Copying: 192/1024 [MB] (18 MBps) [2024-10-27T11:34:29.172Z] Copying: 215/1024 [MB] (23 MBps) [2024-10-27T11:34:30.131Z] Copying: 237/1024 [MB] (21 MBps) [2024-10-27T11:34:31.074Z] Copying: 261/1024 [MB] (24 MBps) [2024-10-27T11:34:32.019Z] Copying: 286/1024 [MB] (25 MBps) [2024-10-27T11:34:32.959Z] Copying: 304/1024 [MB] (17 MBps) [2024-10-27T11:34:33.898Z] Copying: 331/1024 [MB] (27 MBps) [2024-10-27T11:34:35.282Z] Copying: 343/1024 [MB] (11 MBps) [2024-10-27T11:34:36.226Z] Copying: 354/1024 [MB] (10 MBps) [2024-10-27T11:34:37.169Z] Copying: 366/1024 [MB] (12 MBps) [2024-10-27T11:34:38.112Z] Copying: 379/1024 [MB] (13 MBps) [2024-10-27T11:34:39.057Z] Copying: 390/1024 [MB] (10 MBps) [2024-10-27T11:34:40.000Z] Copying: 402/1024 [MB] (12 MBps) [2024-10-27T11:34:40.944Z] Copying: 413/1024 [MB] (10 MBps) [2024-10-27T11:34:41.886Z] Copying: 429/1024 [MB] (16 MBps) [2024-10-27T11:34:43.272Z] Copying: 442/1024 [MB] (12 MBps) [2024-10-27T11:34:44.215Z] Copying: 453/1024 [MB] (10 MBps) [2024-10-27T11:34:45.158Z] Copying: 463/1024 [MB] (10 MBps) [2024-10-27T11:34:46.103Z] Copying: 478/1024 [MB] (14 MBps) [2024-10-27T11:34:47.047Z] Copying: 498/1024 [MB] (20 MBps) [2024-10-27T11:34:47.991Z] Copying: 516/1024 [MB] (17 MBps) [2024-10-27T11:34:48.933Z] Copying: 529/1024 [MB] (13 MBps) [2024-10-27T11:34:49.874Z] Copying: 540/1024 [MB] (10 MBps) [2024-10-27T11:34:51.260Z] Copying: 551/1024 [MB] (10 MBps) [2024-10-27T11:34:52.205Z] Copying: 561/1024 [MB] (10 MBps) [2024-10-27T11:34:53.148Z] Copying: 571/1024 [MB] (10 MBps) [2024-10-27T11:34:54.092Z] Copying: 582/1024 [MB] (10 MBps) [2024-10-27T11:34:55.036Z] Copying: 593/1024 [MB] (10 MBps) [2024-10-27T11:34:55.980Z] Copying: 604/1024 [MB] (10 MBps) [2024-10-27T11:34:56.924Z] Copying: 615/1024 [MB] (10 MBps) [2024-10-27T11:34:58.309Z] Copying: 631/1024 [MB] (16 MBps) [2024-10-27T11:34:58.915Z] Copying: 644/1024 [MB] (12 MBps) [2024-10-27T11:34:59.896Z] Copying: 667/1024 [MB] (23 MBps) [2024-10-27T11:35:01.283Z] Copying: 683/1024 [MB] (15 MBps) [2024-10-27T11:35:02.227Z] Copying: 695/1024 [MB] (11 MBps) [2024-10-27T11:35:03.171Z] Copying: 709/1024 [MB] (14 MBps) [2024-10-27T11:35:04.115Z] Copying: 728/1024 [MB] (18 MBps) [2024-10-27T11:35:05.059Z] Copying: 747/1024 [MB] (19 MBps) [2024-10-27T11:35:06.002Z] Copying: 758/1024 [MB] (10 MBps) [2024-10-27T11:35:06.945Z] Copying: 769/1024 [MB] (10 MBps) [2024-10-27T11:35:07.888Z] Copying: 780/1024 [MB] (11 MBps) [2024-10-27T11:35:09.275Z] Copying: 792/1024 [MB] (11 MBps) [2024-10-27T11:35:10.218Z] Copying: 809/1024 [MB] (17 MBps) [2024-10-27T11:35:11.163Z] Copying: 820/1024 [MB] (11 MBps) [2024-10-27T11:35:12.109Z] Copying: 831/1024 [MB] (11 MBps) 
[2024-10-27T11:35:13.052Z] Copying: 843/1024 [MB] (12 MBps) [2024-10-27T11:35:13.991Z] Copying: 858/1024 [MB] (14 MBps) [2024-10-27T11:35:14.931Z] Copying: 874/1024 [MB] (16 MBps) [2024-10-27T11:35:15.873Z] Copying: 894/1024 [MB] (19 MBps) [2024-10-27T11:35:17.261Z] Copying: 916/1024 [MB] (22 MBps) [2024-10-27T11:35:18.206Z] Copying: 930/1024 [MB] (13 MBps) [2024-10-27T11:35:19.151Z] Copying: 951/1024 [MB] (20 MBps) [2024-10-27T11:35:20.096Z] Copying: 970/1024 [MB] (18 MBps) [2024-10-27T11:35:21.041Z] Copying: 985/1024 [MB] (14 MBps) [2024-10-27T11:35:21.614Z] Copying: 1008/1024 [MB] (23 MBps) [2024-10-27T11:35:21.878Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-27 11:35:21.643436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.643562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:36.597 [2024-10-27 11:35:21.643601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.597 [2024-10-27 11:35:21.643626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.643685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:36.597 [2024-10-27 11:35:21.649872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.649916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:36.597 [2024-10-27 11:35:21.649927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.148 ms 00:20:36.597 [2024-10-27 11:35:21.649943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.650163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.650174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:36.597 [2024-10-27 11:35:21.650183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:20:36.597 [2024-10-27 11:35:21.650191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.653647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.653680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:36.597 [2024-10-27 11:35:21.653689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:20:36.597 [2024-10-27 11:35:21.653697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.659912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.660115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:36.597 [2024-10-27 11:35:21.660137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:20:36.597 [2024-10-27 11:35:21.660145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.686603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.686792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:36.597 [2024-10-27 11:35:21.686814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.389 ms 00:20:36.597 [2024-10-27 11:35:21.686822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.597 [2024-10-27 11:35:21.703620] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.597 [2024-10-27 11:35:21.703677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:36.597 [2024-10-27 11:35:21.703694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.503 ms 00:20:36.598 [2024-10-27 11:35:21.703703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.703854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.598 [2024-10-27 11:35:21.703866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:36.598 [2024-10-27 11:35:21.703883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:36.598 [2024-10-27 11:35:21.703891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.730156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.598 [2024-10-27 11:35:21.730376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:36.598 [2024-10-27 11:35:21.730398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.249 ms 00:20:36.598 [2024-10-27 11:35:21.730405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.756220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.598 [2024-10-27 11:35:21.756280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:36.598 [2024-10-27 11:35:21.756309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.715 ms 00:20:36.598 [2024-10-27 11:35:21.756318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.781468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.598 [2024-10-27 11:35:21.781648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:36.598 [2024-10-27 11:35:21.781669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.103 ms 00:20:36.598 [2024-10-27 11:35:21.781676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.807223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.598 [2024-10-27 11:35:21.807270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:36.598 [2024-10-27 11:35:21.807282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.371 ms 00:20:36.598 [2024-10-27 11:35:21.807290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.598 [2024-10-27 11:35:21.807361] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:36.598 [2024-10-27 11:35:21.807379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807433] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 
11:35:21.807627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:20:36.598 [2024-10-27 11:35:21.807815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:36.598 [2024-10-27 11:35:21.807953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.807998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:36.599 [2024-10-27 11:35:21.808169] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:36.599 [2024-10-27 11:35:21.808178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:20:36.599 [2024-10-27 11:35:21.808190] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:36.599 [2024-10-27 11:35:21.808197] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:36.599 [2024-10-27 11:35:21.808204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:36.599 [2024-10-27 11:35:21.808212] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:36.599 [2024-10-27 11:35:21.808220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:36.599 [2024-10-27 11:35:21.808228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:36.599 [2024-10-27 11:35:21.808244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:36.599 [2024-10-27 11:35:21.808250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:36.599 [2024-10-27 11:35:21.808257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:36.599 [2024-10-27 11:35:21.808265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.599 [2024-10-27 11:35:21.808273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:36.599 [2024-10-27 11:35:21.808282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:20:36.599 [2024-10-27 11:35:21.808290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.822242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.599 [2024-10-27 11:35:21.822286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:36.599 [2024-10-27 11:35:21.822316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.921 ms 00:20:36.599 [2024-10-27 11:35:21.822324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.822723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.599 [2024-10-27 11:35:21.822742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:36.599 [2024-10-27 11:35:21.822751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:20:36.599 [2024-10-27 11:35:21.822759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.861074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.599 [2024-10-27 11:35:21.861277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.599 [2024-10-27 11:35:21.861321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.599 [2024-10-27 11:35:21.861332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.861406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.599 [2024-10-27 11:35:21.861431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.599 [2024-10-27 11:35:21.861441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.599 [2024-10-27 11:35:21.861450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.861552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.599 [2024-10-27 11:35:21.861563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.599 [2024-10-27 11:35:21.861572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.599 [2024-10-27 11:35:21.861579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.599 [2024-10-27 11:35:21.861595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.599 [2024-10-27 11:35:21.861604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:20:36.599 [2024-10-27 11:35:21.861612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.599 [2024-10-27 11:35:21.861621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:21.945443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:21.945496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.861 [2024-10-27 11:35:21.945509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:21.945518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.014572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.014629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.861 [2024-10-27 11:35:22.014642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.014651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.014722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.014732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.861 [2024-10-27 11:35:22.014741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.014750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.014807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.014818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.861 [2024-10-27 11:35:22.014827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.014835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.014931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.014946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.861 [2024-10-27 11:35:22.014955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.014964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.014997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.015007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:36.861 [2024-10-27 11:35:22.015016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.015024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.015067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.015080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.861 [2024-10-27 11:35:22.015088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.015097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.015145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:36.861 [2024-10-27 11:35:22.015155] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.861 [2024-10-27 11:35:22.015164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:36.861 [2024-10-27 11:35:22.015173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.861 [2024-10-27 11:35:22.015345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.870 ms, result 0 00:20:37.804 00:20:37.804 00:20:37.804 11:35:22 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:39.719 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:39.719 11:35:24 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:39.980 [2024-10-27 11:35:25.055988] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:20:39.980 [2024-10-27 11:35:25.056125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75969 ] 00:20:39.980 [2024-10-27 11:35:25.219916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.242 [2024-10-27 11:35:25.335248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.503 [2024-10-27 11:35:25.621614] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.503 [2024-10-27 11:35:25.621948] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.766 [2024-10-27 11:35:25.783985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.784201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.766 [2024-10-27 11:35:25.784230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.766 [2024-10-27 11:35:25.784240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.784331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.784344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.766 [2024-10-27 11:35:25.784356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:40.766 [2024-10-27 11:35:25.784364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.784388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.766 [2024-10-27 11:35:25.785119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.766 [2024-10-27 11:35:25.785147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.785157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.766 [2024-10-27 11:35:25.785166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:20:40.766 [2024-10-27 11:35:25.785175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.787051] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.766 [2024-10-27 11:35:25.801605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.801781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.766 [2024-10-27 11:35:25.801803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.556 ms 00:20:40.766 [2024-10-27 11:35:25.801813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.801965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.801995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.766 [2024-10-27 11:35:25.802006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:40.766 [2024-10-27 11:35:25.802014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.810364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.810525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.766 [2024-10-27 11:35:25.810542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.269 ms 00:20:40.766 [2024-10-27 11:35:25.810552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.810639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.810649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.766 [2024-10-27 11:35:25.810658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:40.766 [2024-10-27 11:35:25.810666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.766 [2024-10-27 11:35:25.810711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.766 [2024-10-27 11:35:25.810721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.767 [2024-10-27 11:35:25.810730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:40.767 [2024-10-27 11:35:25.810737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-10-27 11:35:25.810763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.767 [2024-10-27 11:35:25.814768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-10-27 11:35:25.814823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.767 [2024-10-27 11:35:25.814834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:20:40.767 [2024-10-27 11:35:25.814845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-10-27 11:35:25.814880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-10-27 11:35:25.814889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.767 [2024-10-27 11:35:25.814898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:40.767 [2024-10-27 11:35:25.814906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-10-27 11:35:25.814959] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.767 [2024-10-27 11:35:25.814981] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.767 [2024-10-27 11:35:25.815018] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.767 [2024-10-27 11:35:25.815038] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.767 [2024-10-27 11:35:25.815145] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.767 [2024-10-27 11:35:25.815157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.767 [2024-10-27 11:35:25.815168] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.767 [2024-10-27 11:35:25.815179] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815188] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.767 [2024-10-27 11:35:25.815204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.767 [2024-10-27 11:35:25.815213] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.767 [2024-10-27 11:35:25.815220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.767 [2024-10-27 11:35:25.815232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-10-27 11:35:25.815240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.767 [2024-10-27 11:35:25.815248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:40.767 [2024-10-27 11:35:25.815255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-10-27 11:35:25.815362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.767 [2024-10-27 11:35:25.815372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.767 [2024-10-27 11:35:25.815381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:40.767 [2024-10-27 11:35:25.815389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.767 [2024-10-27 11:35:25.815496] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.767 [2024-10-27 11:35:25.815510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.767 [2024-10-27 11:35:25.815520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.767 [2024-10-27 11:35:25.815543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.767 [2024-10-27 11:35:25.815566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:20:40.767 [2024-10-27 11:35:25.815573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.767 [2024-10-27 11:35:25.815581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.767 [2024-10-27 11:35:25.815588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.767 [2024-10-27 11:35:25.815596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.767 [2024-10-27 11:35:25.815604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.767 [2024-10-27 11:35:25.815611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.767 [2024-10-27 11:35:25.815625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.767 [2024-10-27 11:35:25.815639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.767 [2024-10-27 11:35:25.815660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.767 [2024-10-27 11:35:25.815681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.767 [2024-10-27 11:35:25.815703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.767 [2024-10-27 11:35:25.815723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.767 [2024-10-27 11:35:25.815743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.767 [2024-10-27 11:35:25.815757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.767 [2024-10-27 11:35:25.815764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.767 [2024-10-27 11:35:25.815770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.767 [2024-10-27 11:35:25.815777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.767 [2024-10-27 11:35:25.815783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.767 [2024-10-27 11:35:25.815790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815797] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.767 [2024-10-27 11:35:25.815803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:40.767 [2024-10-27 11:35:25.815809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815815] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.767 [2024-10-27 11:35:25.815826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.767 [2024-10-27 11:35:25.815834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.767 [2024-10-27 11:35:25.815842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.767 [2024-10-27 11:35:25.815849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.767 [2024-10-27 11:35:25.815856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.767 [2024-10-27 11:35:25.815863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.767 [2024-10-27 11:35:25.815869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.768 [2024-10-27 11:35:25.815876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.768 [2024-10-27 11:35:25.815882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.768 [2024-10-27 11:35:25.815891] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.768 [2024-10-27 11:35:25.815900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.815910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.768 [2024-10-27 11:35:25.815917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.768 [2024-10-27 11:35:25.815924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.768 [2024-10-27 11:35:25.815931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.768 [2024-10-27 11:35:25.815939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.768 [2024-10-27 11:35:25.815946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.768 [2024-10-27 11:35:25.815953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.768 [2024-10-27 11:35:25.815960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:40.768 [2024-10-27 11:35:25.815967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.768 [2024-10-27 11:35:25.815974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.815981] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.815988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.815996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.816003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.768 [2024-10-27 11:35:25.816009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.768 [2024-10-27 11:35:25.816018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.816028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.768 [2024-10-27 11:35:25.816036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.768 [2024-10-27 11:35:25.816043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.768 [2024-10-27 11:35:25.816050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.768 [2024-10-27 11:35:25.816058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.816069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.768 [2024-10-27 11:35:25.816078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:20:40.768 [2024-10-27 11:35:25.816086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.848122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.848173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.768 [2024-10-27 11:35:25.848186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.990 ms 00:20:40.768 [2024-10-27 11:35:25.848194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.848287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.848320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.768 [2024-10-27 11:35:25.848330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:40.768 [2024-10-27 11:35:25.848338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.891933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.891985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.768 [2024-10-27 11:35:25.891999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.535 ms 00:20:40.768 [2024-10-27 11:35:25.892008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.892057] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.892067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.768 [2024-10-27 11:35:25.892077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.768 [2024-10-27 11:35:25.892089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.892692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.892714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.768 [2024-10-27 11:35:25.892740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:20:40.768 [2024-10-27 11:35:25.892749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.892898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.892908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.768 [2024-10-27 11:35:25.892917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:40.768 [2024-10-27 11:35:25.892925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.908352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.908395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.768 [2024-10-27 11:35:25.908407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.400 ms 00:20:40.768 [2024-10-27 11:35:25.908418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.922609] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:40.768 [2024-10-27 11:35:25.922660] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.768 [2024-10-27 11:35:25.922674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.922682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.768 [2024-10-27 11:35:25.922691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.146 ms 00:20:40.768 [2024-10-27 11:35:25.922698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.948454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.948510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.768 [2024-10-27 11:35:25.948523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.701 ms 00:20:40.768 [2024-10-27 11:35:25.948531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.961331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.961513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.768 [2024-10-27 11:35:25.961533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.744 ms 00:20:40.768 [2024-10-27 11:35:25.961541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.973917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 
11:35:25.973962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.768 [2024-10-27 11:35:25.973974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.337 ms 00:20:40.768 [2024-10-27 11:35:25.973982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:25.974655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:25.974679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.768 [2024-10-27 11:35:25.974690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:20:40.768 [2024-10-27 11:35:25.974699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.768 [2024-10-27 11:35:26.037738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.768 [2024-10-27 11:35:26.037800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.768 [2024-10-27 11:35:26.037816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.016 ms 00:20:40.768 [2024-10-27 11:35:26.037831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.048929] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:41.030 [2024-10-27 11:35:26.051920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.051963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.030 [2024-10-27 11:35:26.051975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.033 ms 00:20:41.030 [2024-10-27 11:35:26.051983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.052066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.052078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.030 [2024-10-27 11:35:26.052088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:41.030 [2024-10-27 11:35:26.052096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.052170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.052182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.030 [2024-10-27 11:35:26.052190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:41.030 [2024-10-27 11:35:26.052199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.052220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.052229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.030 [2024-10-27 11:35:26.052238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:41.030 [2024-10-27 11:35:26.052246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.052282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.030 [2024-10-27 11:35:26.052462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.052498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.030 
[2024-10-27 11:35:26.052520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:20:41.030 [2024-10-27 11:35:26.052541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.030 [2024-10-27 11:35:26.078209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.030 [2024-10-27 11:35:26.078406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.030 [2024-10-27 11:35:26.078428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.628 ms 00:20:41.031 [2024-10-27 11:35:26.078437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.031 [2024-10-27 11:35:26.078526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.031 [2024-10-27 11:35:26.078537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:41.031 [2024-10-27 11:35:26.078546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:41.031 [2024-10-27 11:35:26.078554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.031 [2024-10-27 11:35:26.079989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.500 ms, result 0 00:20:41.975  [2024-10-27T11:35:28.199Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-27T11:35:29.142Z] Copying: 48/1024 [MB] (38 MBps) [2024-10-27T11:35:30.577Z] Copying: 59/1024 [MB] (10 MBps) [2024-10-27T11:35:31.198Z] Copying: 71/1024 [MB] (11 MBps) [2024-10-27T11:35:32.141Z] Copying: 87/1024 [MB] (16 MBps) [2024-10-27T11:35:33.527Z] Copying: 103/1024 [MB] (15 MBps) [2024-10-27T11:35:34.100Z] Copying: 119/1024 [MB] (16 MBps) [2024-10-27T11:35:35.485Z] Copying: 133/1024 [MB] (14 MBps) [2024-10-27T11:35:36.426Z] Copying: 153/1024 [MB] (19 MBps) [2024-10-27T11:35:37.371Z] Copying: 168/1024 [MB] (15 MBps) [2024-10-27T11:35:38.314Z] Copying: 188/1024 [MB] (20 MBps) [2024-10-27T11:35:39.258Z] Copying: 211/1024 [MB] (22 MBps) [2024-10-27T11:35:40.202Z] Copying: 232/1024 [MB] (21 MBps) [2024-10-27T11:35:41.147Z] Copying: 254/1024 [MB] (22 MBps) [2024-10-27T11:35:42.534Z] Copying: 274/1024 [MB] (20 MBps) [2024-10-27T11:35:43.107Z] Copying: 295/1024 [MB] (20 MBps) [2024-10-27T11:35:44.497Z] Copying: 312/1024 [MB] (17 MBps) [2024-10-27T11:35:45.440Z] Copying: 329/1024 [MB] (17 MBps) [2024-10-27T11:35:46.384Z] Copying: 342/1024 [MB] (12 MBps) [2024-10-27T11:35:47.325Z] Copying: 358/1024 [MB] (16 MBps) [2024-10-27T11:35:48.272Z] Copying: 380/1024 [MB] (21 MBps) [2024-10-27T11:35:49.216Z] Copying: 391/1024 [MB] (11 MBps) [2024-10-27T11:35:50.158Z] Copying: 401/1024 [MB] (10 MBps) [2024-10-27T11:35:51.102Z] Copying: 413/1024 [MB] (11 MBps) [2024-10-27T11:35:52.490Z] Copying: 423/1024 [MB] (10 MBps) [2024-10-27T11:35:53.444Z] Copying: 433/1024 [MB] (10 MBps) [2024-10-27T11:35:54.386Z] Copying: 443/1024 [MB] (10 MBps) [2024-10-27T11:35:55.330Z] Copying: 454/1024 [MB] (10 MBps) [2024-10-27T11:35:56.274Z] Copying: 470/1024 [MB] (16 MBps) [2024-10-27T11:35:57.219Z] Copying: 497/1024 [MB] (26 MBps) [2024-10-27T11:35:58.162Z] Copying: 508/1024 [MB] (10 MBps) [2024-10-27T11:35:59.106Z] Copying: 519/1024 [MB] (11 MBps) [2024-10-27T11:36:00.493Z] Copying: 530/1024 [MB] (10 MBps) [2024-10-27T11:36:01.436Z] Copying: 540/1024 [MB] (10 MBps) [2024-10-27T11:36:02.455Z] Copying: 564/1024 [MB] (23 MBps) [2024-10-27T11:36:03.422Z] Copying: 586/1024 [MB] (21 MBps) [2024-10-27T11:36:04.367Z] Copying: 601/1024 [MB] (15 MBps) [2024-10-27T11:36:05.310Z] 
Copying: 617/1024 [MB] (15 MBps) [2024-10-27T11:36:06.254Z] Copying: 635/1024 [MB] (17 MBps) [2024-10-27T11:36:07.198Z] Copying: 648/1024 [MB] (12 MBps) [2024-10-27T11:36:08.143Z] Copying: 662/1024 [MB] (14 MBps) [2024-10-27T11:36:09.101Z] Copying: 683/1024 [MB] (21 MBps) [2024-10-27T11:36:10.488Z] Copying: 705/1024 [MB] (21 MBps) [2024-10-27T11:36:11.432Z] Copying: 722/1024 [MB] (16 MBps) [2024-10-27T11:36:12.376Z] Copying: 738/1024 [MB] (16 MBps) [2024-10-27T11:36:13.320Z] Copying: 751/1024 [MB] (13 MBps) [2024-10-27T11:36:14.263Z] Copying: 770/1024 [MB] (18 MBps) [2024-10-27T11:36:15.207Z] Copying: 798/1024 [MB] (28 MBps) [2024-10-27T11:36:16.149Z] Copying: 820/1024 [MB] (22 MBps) [2024-10-27T11:36:17.535Z] Copying: 835/1024 [MB] (14 MBps) [2024-10-27T11:36:18.106Z] Copying: 845/1024 [MB] (10 MBps) [2024-10-27T11:36:19.491Z] Copying: 855/1024 [MB] (10 MBps) [2024-10-27T11:36:20.434Z] Copying: 874/1024 [MB] (18 MBps) [2024-10-27T11:36:21.378Z] Copying: 885/1024 [MB] (11 MBps) [2024-10-27T11:36:22.321Z] Copying: 906/1024 [MB] (20 MBps) [2024-10-27T11:36:23.268Z] Copying: 921/1024 [MB] (14 MBps) [2024-10-27T11:36:24.215Z] Copying: 932/1024 [MB] (11 MBps) [2024-10-27T11:36:25.159Z] Copying: 950/1024 [MB] (17 MBps) [2024-10-27T11:36:26.102Z] Copying: 969/1024 [MB] (18 MBps) [2024-10-27T11:36:27.488Z] Copying: 987/1024 [MB] (17 MBps) [2024-10-27T11:36:28.426Z] Copying: 1004/1024 [MB] (17 MBps) [2024-10-27T11:36:29.365Z] Copying: 1014/1024 [MB] (10 MBps) [2024-10-27T11:36:29.938Z] Copying: 1047880/1048576 [kB] (8832 kBps) [2024-10-27T11:36:29.938Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-27 11:36:29.789712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.789797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:44.657 [2024-10-27 11:36:29.789815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:44.657 [2024-10-27 11:36:29.789826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.657 [2024-10-27 11:36:29.790812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:44.657 [2024-10-27 11:36:29.795762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.795810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:44.657 [2024-10-27 11:36:29.795823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.919 ms 00:21:44.657 [2024-10-27 11:36:29.795833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.657 [2024-10-27 11:36:29.808841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.808901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:44.657 [2024-10-27 11:36:29.808915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.424 ms 00:21:44.657 [2024-10-27 11:36:29.808924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.657 [2024-10-27 11:36:29.834008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.834064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:44.657 [2024-10-27 11:36:29.834079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.054 ms 00:21:44.657 [2024-10-27 11:36:29.834089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:44.657 [2024-10-27 11:36:29.840289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.840334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:44.657 [2024-10-27 11:36:29.840348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.160 ms 00:21:44.657 [2024-10-27 11:36:29.840356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.657 [2024-10-27 11:36:29.867143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.867190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:44.657 [2024-10-27 11:36:29.867203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.733 ms 00:21:44.657 [2024-10-27 11:36:29.867211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.657 [2024-10-27 11:36:29.882955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.657 [2024-10-27 11:36:29.883002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:44.657 [2024-10-27 11:36:29.883023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.696 ms 00:21:44.657 [2024-10-27 11:36:29.883032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.053897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.919 [2024-10-27 11:36:30.053974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:44.919 [2024-10-27 11:36:30.053992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 170.808 ms 00:21:44.919 [2024-10-27 11:36:30.054002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.081174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.919 [2024-10-27 11:36:30.081431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:44.919 [2024-10-27 11:36:30.081456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.154 ms 00:21:44.919 [2024-10-27 11:36:30.081466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.106794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.919 [2024-10-27 11:36:30.106855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:44.919 [2024-10-27 11:36:30.106868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:21:44.919 [2024-10-27 11:36:30.106876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.131944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.919 [2024-10-27 11:36:30.131991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:44.919 [2024-10-27 11:36:30.132004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.019 ms 00:21:44.919 [2024-10-27 11:36:30.132012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.156780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.919 [2024-10-27 11:36:30.156848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:44.919 [2024-10-27 11:36:30.156862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.691 ms 00:21:44.919 [2024-10-27 
11:36:30.156869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.919 [2024-10-27 11:36:30.156915] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:44.919 [2024-10-27 11:36:30.156932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101376 / 261120 wr_cnt: 1 state: open 00:21:44.919 [2024-10-27 11:36:30.156943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:44.919 [2024-10-27 11:36:30.156952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.156960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.156968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.156977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.156985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.156993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157116] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 
11:36:30.157334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:21:44.920 [2024-10-27 11:36:30.157536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:44.920 [2024-10-27 11:36:30.157599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:44.921 [2024-10-27 11:36:30.157770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:44.921 [2024-10-27 11:36:30.157778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:21:44.921 [2024-10-27 11:36:30.157786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101376 00:21:44.921 [2024-10-27 11:36:30.157794] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102336 00:21:44.921 [2024-10-27 11:36:30.157802] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101376 00:21:44.921 [2024-10-27 11:36:30.157811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:21:44.921 [2024-10-27 11:36:30.157819] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:44.921 [2024-10-27 11:36:30.157827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:44.921 [2024-10-27 11:36:30.157847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:44.921 [2024-10-27 11:36:30.157855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:44.921 [2024-10-27 11:36:30.157862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:44.921 [2024-10-27 11:36:30.157870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.921 [2024-10-27 11:36:30.157878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:44.921 [2024-10-27 11:36:30.157888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:21:44.921 [2024-10-27 11:36:30.157896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.921 [2024-10-27 11:36:30.171788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.921 [2024-10-27 11:36:30.171971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:44.921 [2024-10-27 11:36:30.171991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:21:44.921 [2024-10-27 11:36:30.171999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.921 [2024-10-27 11:36:30.172444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.921 [2024-10-27 11:36:30.172464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:44.921 [2024-10-27 11:36:30.172475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:21:44.921 [2024-10-27 11:36:30.172484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.209142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.209194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.183 [2024-10-27 11:36:30.209212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.209222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.209321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.209331] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.183 [2024-10-27 11:36:30.209342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.209351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.209422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.209434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.183 [2024-10-27 11:36:30.209444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.209459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.209477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.209486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.183 [2024-10-27 11:36:30.209496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.209505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.294338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.294590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.183 [2024-10-27 11:36:30.294615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.294631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.364951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.183 [2024-10-27 11:36:30.365017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.183 [2024-10-27 11:36:30.365137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.183 [2024-10-27 11:36:30.365212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.183 [2024-10-27 11:36:30.365368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365410] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:45.183 [2024-10-27 11:36:30.365435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.183 [2024-10-27 11:36:30.365502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.183 [2024-10-27 11:36:30.365574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.183 [2024-10-27 11:36:30.365583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.183 [2024-10-27 11:36:30.365591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.183 [2024-10-27 11:36:30.365723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 578.934 ms, result 0 00:21:46.570 00:21:46.570 00:21:46.570 11:36:31 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:46.570 [2024-10-27 11:36:31.620910] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
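A few cross-checks on the figures above and below, all derived from values already printed in this log. The WAF reported in the shutdown statistics a little further up works out to the ratio of the two write counters printed beside it: 102336 total writes / 101376 user writes ≈ 1.0095. The spdk_dd command just above reads data back from the restored ftl0 bdev into the test file; its --skip/--count values line up with the copy progress reported further down if they are taken to be in the bdev's 4 KiB I/O units (an inference from the totals, not something the log states explicitly):

    262144 blocks * 4096 B/block = 1073741824 B = 1024 MiB   (the "1024/1024 [MB]" total in the copy progress)
    131072 blocks * 4096 B/block =  536870912 B =  512 MiB   (the offset skipped at the start of the input bdev)

The trace_step records throughout this run ("Action"/"Rollback", then name, duration and status, printed from mngt/ftl_mngt.c) are regular enough to post-process when comparing runs. A minimal sketch, assuming one log record per line as the console originally prints them; this helper is illustrative only and not part of the SPDK tree:

    #!/usr/bin/env python3
    # Pair each trace_step "name:" record with the "duration:" record that follows it,
    # then rank the steps, e.g. to see what dominates the 578.934 ms 'FTL shutdown' above.
    import re
    import sys

    NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
    DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def step_durations(lines):
        """Return (step name, duration in ms) pairs found in an FTL management trace."""
        steps, name = [], None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                name = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and name is not None:
                steps.append((name, float(m.group(1))))
                name = None
        return steps

    if __name__ == "__main__":
        steps = step_durations(sys.stdin)
        for name, ms in sorted(steps, key=lambda s: s[1], reverse=True)[:10]:
            print(f"{ms:10.3f} ms  {name}")
        print(f"{sum(ms for _, ms in steps):10.3f} ms  across {len(steps)} steps")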
00:21:46.570 [2024-10-27 11:36:31.621057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76653 ] 00:21:46.570 [2024-10-27 11:36:31.784378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.831 [2024-10-27 11:36:31.902412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.091 [2024-10-27 11:36:32.192985] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:47.091 [2024-10-27 11:36:32.193062] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:47.091 [2024-10-27 11:36:32.354849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.091 [2024-10-27 11:36:32.354914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:47.091 [2024-10-27 11:36:32.354935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:47.091 [2024-10-27 11:36:32.354944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.091 [2024-10-27 11:36:32.355000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.091 [2024-10-27 11:36:32.355012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.091 [2024-10-27 11:36:32.355023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:47.091 [2024-10-27 11:36:32.355031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.091 [2024-10-27 11:36:32.355051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:47.091 [2024-10-27 11:36:32.355951] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:47.091 [2024-10-27 11:36:32.355998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.091 [2024-10-27 11:36:32.356008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.091 [2024-10-27 11:36:32.356018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:47.091 [2024-10-27 11:36:32.356026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.091 [2024-10-27 11:36:32.358251] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:47.353 [2024-10-27 11:36:32.372622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.372679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:47.353 [2024-10-27 11:36:32.372695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.374 ms 00:21:47.353 [2024-10-27 11:36:32.372704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.372781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.372795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:47.353 [2024-10-27 11:36:32.372818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:47.353 [2024-10-27 11:36:32.372826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.381162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:47.353 [2024-10-27 11:36:32.381384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.353 [2024-10-27 11:36:32.381405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.254 ms 00:21:47.353 [2024-10-27 11:36:32.381414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.381504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.381513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.353 [2024-10-27 11:36:32.381523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:47.353 [2024-10-27 11:36:32.381531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.381576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.381587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:47.353 [2024-10-27 11:36:32.381595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:47.353 [2024-10-27 11:36:32.381604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.381630] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:47.353 [2024-10-27 11:36:32.385657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.385695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.353 [2024-10-27 11:36:32.385706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.034 ms 00:21:47.353 [2024-10-27 11:36:32.385718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.385753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.353 [2024-10-27 11:36:32.385763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:47.353 [2024-10-27 11:36:32.385771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:47.353 [2024-10-27 11:36:32.385779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.353 [2024-10-27 11:36:32.385832] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:47.353 [2024-10-27 11:36:32.385855] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:47.353 [2024-10-27 11:36:32.385893] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:47.353 [2024-10-27 11:36:32.385911] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:47.353 [2024-10-27 11:36:32.386019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:47.353 [2024-10-27 11:36:32.386031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:47.353 [2024-10-27 11:36:32.386042] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:47.353 [2024-10-27 11:36:32.386054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:47.353 [2024-10-27 11:36:32.386069] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:47.353 [2024-10-27 11:36:32.386079] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:47.353 [2024-10-27 11:36:32.386087] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:47.354 [2024-10-27 11:36:32.386095] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:47.354 [2024-10-27 11:36:32.386103] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:47.354 [2024-10-27 11:36:32.386115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.354 [2024-10-27 11:36:32.386123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:47.354 [2024-10-27 11:36:32.386131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:47.354 [2024-10-27 11:36:32.386139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.354 [2024-10-27 11:36:32.386222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.354 [2024-10-27 11:36:32.386231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:47.354 [2024-10-27 11:36:32.386239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:47.354 [2024-10-27 11:36:32.386247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.354 [2024-10-27 11:36:32.386376] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:47.354 [2024-10-27 11:36:32.386391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:47.354 [2024-10-27 11:36:32.386399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:47.354 [2024-10-27 11:36:32.386424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:47.354 [2024-10-27 11:36:32.386446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.354 [2024-10-27 11:36:32.386461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:47.354 [2024-10-27 11:36:32.386468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:47.354 [2024-10-27 11:36:32.386475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:47.354 [2024-10-27 11:36:32.386481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:47.354 [2024-10-27 11:36:32.386488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:47.354 [2024-10-27 11:36:32.386501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:47.354 [2024-10-27 11:36:32.386515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386522] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:47.354 [2024-10-27 11:36:32.386536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:47.354 [2024-10-27 11:36:32.386557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:47.354 [2024-10-27 11:36:32.386577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:47.354 [2024-10-27 11:36:32.386598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:47.354 [2024-10-27 11:36:32.386617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.354 [2024-10-27 11:36:32.386630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:47.354 [2024-10-27 11:36:32.386637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:47.354 [2024-10-27 11:36:32.386643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:47.354 [2024-10-27 11:36:32.386650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:47.354 [2024-10-27 11:36:32.386656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:47.354 [2024-10-27 11:36:32.386663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:47.354 [2024-10-27 11:36:32.386676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:47.354 [2024-10-27 11:36:32.386683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386691] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:47.354 [2024-10-27 11:36:32.386699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:47.354 [2024-10-27 11:36:32.386707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:47.354 [2024-10-27 11:36:32.386723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:47.354 [2024-10-27 11:36:32.386730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:47.354 [2024-10-27 11:36:32.386736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:47.354 
[2024-10-27 11:36:32.386745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:47.354 [2024-10-27 11:36:32.386753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:47.354 [2024-10-27 11:36:32.386760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:47.354 [2024-10-27 11:36:32.386769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:47.354 [2024-10-27 11:36:32.386778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.354 [2024-10-27 11:36:32.386787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:47.354 [2024-10-27 11:36:32.386795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:47.354 [2024-10-27 11:36:32.386803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:47.354 [2024-10-27 11:36:32.386810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:47.354 [2024-10-27 11:36:32.386817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:47.354 [2024-10-27 11:36:32.386824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:47.354 [2024-10-27 11:36:32.386831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:47.354 [2024-10-27 11:36:32.386838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:47.354 [2024-10-27 11:36:32.386845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:47.354 [2024-10-27 11:36:32.386853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:47.354 [2024-10-27 11:36:32.386860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:47.355 [2024-10-27 11:36:32.386867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:47.355 [2024-10-27 11:36:32.386874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:47.355 [2024-10-27 11:36:32.386882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:47.355 [2024-10-27 11:36:32.386889] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:47.355 [2024-10-27 11:36:32.386897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:47.355 [2024-10-27 11:36:32.386908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:47.355 [2024-10-27 11:36:32.386915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:47.355 [2024-10-27 11:36:32.386923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:47.355 [2024-10-27 11:36:32.386930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:47.355 [2024-10-27 11:36:32.386937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.386945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:47.355 [2024-10-27 11:36:32.386953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:21:47.355 [2024-10-27 11:36:32.386961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.419378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.419425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:47.355 [2024-10-27 11:36:32.419438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.374 ms 00:21:47.355 [2024-10-27 11:36:32.419447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.419539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.419552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:47.355 [2024-10-27 11:36:32.419561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:47.355 [2024-10-27 11:36:32.419569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.470004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.470061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.355 [2024-10-27 11:36:32.470075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.377 ms 00:21:47.355 [2024-10-27 11:36:32.470085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.470135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.470145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.355 [2024-10-27 11:36:32.470155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:47.355 [2024-10-27 11:36:32.470166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.470828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.470869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.355 [2024-10-27 11:36:32.470880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:21:47.355 [2024-10-27 11:36:32.470888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.471049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.471065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.355 [2024-10-27 11:36:32.471075] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:21:47.355 [2024-10-27 11:36:32.471084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.486801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.486843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.355 [2024-10-27 11:36:32.486855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.690 ms 00:21:47.355 [2024-10-27 11:36:32.486866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.501166] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:47.355 [2024-10-27 11:36:32.501214] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:47.355 [2024-10-27 11:36:32.501228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.501237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:47.355 [2024-10-27 11:36:32.501248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.249 ms 00:21:47.355 [2024-10-27 11:36:32.501256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.527087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.527146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:47.355 [2024-10-27 11:36:32.527158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.757 ms 00:21:47.355 [2024-10-27 11:36:32.527167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.540173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.540229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:47.355 [2024-10-27 11:36:32.540242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:21:47.355 [2024-10-27 11:36:32.540249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.552865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.552911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:47.355 [2024-10-27 11:36:32.552923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.546 ms 00:21:47.355 [2024-10-27 11:36:32.552931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.553599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.553624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:47.355 [2024-10-27 11:36:32.553636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:47.355 [2024-10-27 11:36:32.553644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.618799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.355 [2024-10-27 11:36:32.618867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:47.355 [2024-10-27 11:36:32.618884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.132 ms 00:21:47.355 [2024-10-27 11:36:32.618901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.355 [2024-10-27 11:36:32.630029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:47.617 [2024-10-27 11:36:32.633020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.633066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:47.617 [2024-10-27 11:36:32.633078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.061 ms 00:21:47.617 [2024-10-27 11:36:32.633087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.633174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.633186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:47.617 [2024-10-27 11:36:32.633196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:47.617 [2024-10-27 11:36:32.633205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.634943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.634992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:47.617 [2024-10-27 11:36:32.635003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:21:47.617 [2024-10-27 11:36:32.635013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.635043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.635052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:47.617 [2024-10-27 11:36:32.635061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:47.617 [2024-10-27 11:36:32.635069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.635111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:47.617 [2024-10-27 11:36:32.635125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.635135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:47.617 [2024-10-27 11:36:32.635144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:47.617 [2024-10-27 11:36:32.635152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.661028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.661231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:47.617 [2024-10-27 11:36:32.661253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.857 ms 00:21:47.617 [2024-10-27 11:36:32.661263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.617 [2024-10-27 11:36:32.661381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.617 [2024-10-27 11:36:32.661393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:47.617 [2024-10-27 11:36:32.661403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:47.617 [2024-10-27 11:36:32.661411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
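A similar cross-check applies to the layout dump earlier in this startup sequence: the hex blk_offs/blk_sz values in the "SB metadata layout" records appear to be counted in the device's 4 KiB blocks, since they reproduce the MiB figures in the "NV cache layout" records. For example:

    blk_offs 0x20   = 32 blocks    * 4 KiB = 0.12 MiB,   blk_sz 0x5000 = 20480 blocks * 4 KiB = 80.00 MiB   (the l2p region)
    blk_offs 0x5020 = 20512 blocks * 4 KiB = 80.12 MiB,  blk_sz 0x80   = 128 blocks   * 4 KiB = 0.50 MiB    (band_md)

which match the "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" and "Region band_md ... offset: 80.12 MiB ... blocks: 0.50 MiB" entries printed above.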
00:21:47.617 [2024-10-27 11:36:32.662672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.306 ms, result 0 00:21:48.668  [2024-10-27T11:36:34.893Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-27T11:36:36.278Z] Copying: 28/1024 [MB] (15 MBps) [2024-10-27T11:36:37.223Z] Copying: 39/1024 [MB] (11 MBps) [2024-10-27T11:36:38.168Z] Copying: 50/1024 [MB] (10 MBps) [2024-10-27T11:36:39.109Z] Copying: 61/1024 [MB] (10 MBps) [2024-10-27T11:36:40.052Z] Copying: 79/1024 [MB] (18 MBps) [2024-10-27T11:36:40.996Z] Copying: 94/1024 [MB] (15 MBps) [2024-10-27T11:36:41.939Z] Copying: 110/1024 [MB] (16 MBps) [2024-10-27T11:36:42.883Z] Copying: 128/1024 [MB] (17 MBps) [2024-10-27T11:36:44.270Z] Copying: 142/1024 [MB] (14 MBps) [2024-10-27T11:36:45.215Z] Copying: 158/1024 [MB] (16 MBps) [2024-10-27T11:36:46.159Z] Copying: 171/1024 [MB] (12 MBps) [2024-10-27T11:36:47.102Z] Copying: 188/1024 [MB] (17 MBps) [2024-10-27T11:36:48.083Z] Copying: 203/1024 [MB] (15 MBps) [2024-10-27T11:36:49.027Z] Copying: 216/1024 [MB] (12 MBps) [2024-10-27T11:36:49.969Z] Copying: 236/1024 [MB] (19 MBps) [2024-10-27T11:36:50.913Z] Copying: 247/1024 [MB] (11 MBps) [2024-10-27T11:36:52.299Z] Copying: 264/1024 [MB] (17 MBps) [2024-10-27T11:36:52.871Z] Copying: 285/1024 [MB] (20 MBps) [2024-10-27T11:36:54.257Z] Copying: 301/1024 [MB] (16 MBps) [2024-10-27T11:36:55.199Z] Copying: 320/1024 [MB] (18 MBps) [2024-10-27T11:36:56.142Z] Copying: 339/1024 [MB] (19 MBps) [2024-10-27T11:36:57.084Z] Copying: 349/1024 [MB] (10 MBps) [2024-10-27T11:36:58.026Z] Copying: 360/1024 [MB] (10 MBps) [2024-10-27T11:36:58.970Z] Copying: 376/1024 [MB] (15 MBps) [2024-10-27T11:36:59.913Z] Copying: 391/1024 [MB] (15 MBps) [2024-10-27T11:37:01.298Z] Copying: 403/1024 [MB] (11 MBps) [2024-10-27T11:37:01.871Z] Copying: 416/1024 [MB] (13 MBps) [2024-10-27T11:37:03.255Z] Copying: 436/1024 [MB] (19 MBps) [2024-10-27T11:37:04.202Z] Copying: 457/1024 [MB] (21 MBps) [2024-10-27T11:37:05.144Z] Copying: 476/1024 [MB] (18 MBps) [2024-10-27T11:37:06.148Z] Copying: 498/1024 [MB] (21 MBps) [2024-10-27T11:37:07.091Z] Copying: 521/1024 [MB] (23 MBps) [2024-10-27T11:37:08.033Z] Copying: 542/1024 [MB] (20 MBps) [2024-10-27T11:37:08.978Z] Copying: 560/1024 [MB] (18 MBps) [2024-10-27T11:37:09.919Z] Copying: 582/1024 [MB] (22 MBps) [2024-10-27T11:37:10.862Z] Copying: 602/1024 [MB] (19 MBps) [2024-10-27T11:37:12.248Z] Copying: 621/1024 [MB] (19 MBps) [2024-10-27T11:37:13.190Z] Copying: 633/1024 [MB] (11 MBps) [2024-10-27T11:37:14.132Z] Copying: 648/1024 [MB] (15 MBps) [2024-10-27T11:37:15.074Z] Copying: 664/1024 [MB] (15 MBps) [2024-10-27T11:37:16.019Z] Copying: 674/1024 [MB] (10 MBps) [2024-10-27T11:37:16.963Z] Copying: 685/1024 [MB] (10 MBps) [2024-10-27T11:37:17.908Z] Copying: 696/1024 [MB] (11 MBps) [2024-10-27T11:37:19.295Z] Copying: 713/1024 [MB] (16 MBps) [2024-10-27T11:37:19.865Z] Copying: 723/1024 [MB] (10 MBps) [2024-10-27T11:37:21.250Z] Copying: 733/1024 [MB] (10 MBps) [2024-10-27T11:37:22.194Z] Copying: 744/1024 [MB] (10 MBps) [2024-10-27T11:37:23.138Z] Copying: 754/1024 [MB] (10 MBps) [2024-10-27T11:37:24.082Z] Copying: 765/1024 [MB] (10 MBps) [2024-10-27T11:37:25.025Z] Copying: 786/1024 [MB] (20 MBps) [2024-10-27T11:37:25.968Z] Copying: 798/1024 [MB] (12 MBps) [2024-10-27T11:37:26.911Z] Copying: 808/1024 [MB] (10 MBps) [2024-10-27T11:37:28.296Z] Copying: 819/1024 [MB] (10 MBps) [2024-10-27T11:37:28.867Z] Copying: 830/1024 [MB] (10 MBps) [2024-10-27T11:37:30.254Z] Copying: 841/1024 [MB] (10 MBps) 
[2024-10-27T11:37:31.198Z] Copying: 852/1024 [MB] (10 MBps) [2024-10-27T11:37:32.141Z] Copying: 871/1024 [MB] (18 MBps) [2024-10-27T11:37:33.080Z] Copying: 892/1024 [MB] (20 MBps) [2024-10-27T11:37:34.023Z] Copying: 914/1024 [MB] (22 MBps) [2024-10-27T11:37:34.965Z] Copying: 933/1024 [MB] (18 MBps) [2024-10-27T11:37:35.903Z] Copying: 947/1024 [MB] (13 MBps) [2024-10-27T11:37:37.365Z] Copying: 962/1024 [MB] (15 MBps) [2024-10-27T11:37:37.961Z] Copying: 980/1024 [MB] (17 MBps) [2024-10-27T11:37:38.905Z] Copying: 1000/1024 [MB] (20 MBps) [2024-10-27T11:37:39.476Z] Copying: 1015/1024 [MB] (14 MBps) [2024-10-27T11:37:39.476Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-27 11:37:39.259090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.259166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:54.195 [2024-10-27 11:37:39.259183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:54.195 [2024-10-27 11:37:39.259192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.259223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:54.195 [2024-10-27 11:37:39.262360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.262397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:54.195 [2024-10-27 11:37:39.262409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:22:54.195 [2024-10-27 11:37:39.262419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.262869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.262880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:54.195 [2024-10-27 11:37:39.262892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:54.195 [2024-10-27 11:37:39.262900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.268832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.268890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:54.195 [2024-10-27 11:37:39.268902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.916 ms 00:22:54.195 [2024-10-27 11:37:39.268911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.275287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.275334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:54.195 [2024-10-27 11:37:39.275347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.317 ms 00:22:54.195 [2024-10-27 11:37:39.275355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.302096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.302135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:54.195 [2024-10-27 11:37:39.302149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.654 ms 00:22:54.195 [2024-10-27 11:37:39.302157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.195 [2024-10-27 11:37:39.318264] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.195 [2024-10-27 11:37:39.318319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:54.195 [2024-10-27 11:37:39.318340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.057 ms 00:22:54.195 [2024-10-27 11:37:39.318348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.466 [2024-10-27 11:37:39.667478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.466 [2024-10-27 11:37:39.667670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:54.466 [2024-10-27 11:37:39.667745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 349.071 ms 00:22:54.466 [2024-10-27 11:37:39.667771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.466 [2024-10-27 11:37:39.693791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.466 [2024-10-27 11:37:39.693965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:54.466 [2024-10-27 11:37:39.694034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.985 ms 00:22:54.466 [2024-10-27 11:37:39.694059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.466 [2024-10-27 11:37:39.719864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.466 [2024-10-27 11:37:39.720022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:54.466 [2024-10-27 11:37:39.720167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:22:54.466 [2024-10-27 11:37:39.720193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.731 [2024-10-27 11:37:39.745785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.731 [2024-10-27 11:37:39.745965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:54.731 [2024-10-27 11:37:39.746083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.544 ms 00:22:54.731 [2024-10-27 11:37:39.746110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.731 [2024-10-27 11:37:39.770990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.731 [2024-10-27 11:37:39.771164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.731 [2024-10-27 11:37:39.771232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.790 ms 00:22:54.731 [2024-10-27 11:37:39.771255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.731 [2024-10-27 11:37:39.771344] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.731 [2024-10-27 11:37:39.771378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:54.731 [2024-10-27 11:37:39.771412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771579] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.771815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 
11:37:39.772527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:22:54.731 [2024-10-27 11:37:39.772721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.731 [2024-10-27 11:37:39.772813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.772994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.732 [2024-10-27 11:37:39.773095] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.732 [2024-10-27 11:37:39.773104] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8a78a63f-1220-4fa3-ad2e-cf0b6390ae8b 00:22:54.732 [2024-10-27 11:37:39.773112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:54.732 [2024-10-27 11:37:39.773120] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30656 00:22:54.732 [2024-10-27 11:37:39.773128] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29696 00:22:54.732 [2024-10-27 11:37:39.773137] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0323 00:22:54.732 [2024-10-27 11:37:39.773145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.732 [2024-10-27 11:37:39.773153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.732 [2024-10-27 11:37:39.773167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.732 [2024-10-27 11:37:39.773182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.732 [2024-10-27 11:37:39.773189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.732 [2024-10-27 11:37:39.773198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.732 [2024-10-27 11:37:39.773206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.732 [2024-10-27 11:37:39.773214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:22:54.732 [2024-10-27 11:37:39.773222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.786615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.732 [2024-10-27 11:37:39.786780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.732 [2024-10-27 11:37:39.786798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.355 ms 00:22:54.732 [2024-10-27 11:37:39.786807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.787194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.732 [2024-10-27 11:37:39.787210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.732 [2024-10-27 11:37:39.787220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:22:54.732 [2024-10-27 11:37:39.787227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.823708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.823759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.732 [2024-10-27 11:37:39.823777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.823787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.823856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.823867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.732 [2024-10-27 11:37:39.823877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.823886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.823958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.823970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.732 [2024-10-27 11:37:39.823980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.823993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.824009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.824020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:22:54.732 [2024-10-27 11:37:39.824030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.824039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.909202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.909259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.732 [2024-10-27 11:37:39.909274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.909289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.979723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.979780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.732 [2024-10-27 11:37:39.979792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.979801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.979877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.979888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.732 [2024-10-27 11:37:39.979898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.979907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.979954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.979964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.732 [2024-10-27 11:37:39.979973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.979982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.980083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.980094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.732 [2024-10-27 11:37:39.980102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.980110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.980142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.980155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.732 [2024-10-27 11:37:39.980164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.980172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.980214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.980224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.732 [2024-10-27 11:37:39.980233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.980241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.980318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.732 [2024-10-27 11:37:39.980330] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.732 [2024-10-27 11:37:39.980340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.732 [2024-10-27 11:37:39.980348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.732 [2024-10-27 11:37:39.980483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 721.354 ms, result 0 00:22:55.672 00:22:55.672 00:22:55.672 11:37:40 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.215 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:58.215 11:37:42 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:58.215 11:37:42 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:58.215 11:37:42 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.215 Process with pid 74358 is not found 00:22:58.215 Remove shared memory files 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74358 00:22:58.215 11:37:43 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74358 ']' 00:22:58.215 11:37:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74358 00:22:58.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74358) - No such process 00:22:58.215 11:37:43 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74358 is not found' 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.215 11:37:43 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.215 ************************************ 00:22:58.215 END TEST ftl_restore 00:22:58.215 ************************************ 00:22:58.215 00:22:58.215 real 4m52.173s 00:22:58.215 user 4m40.078s 00:22:58.215 sys 0m11.739s 00:22:58.215 11:37:43 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.215 11:37:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:58.215 11:37:43 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:58.215 11:37:43 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:58.215 11:37:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.215 11:37:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:58.215 ************************************ 00:22:58.215 START TEST ftl_dirty_shutdown 00:22:58.215 ************************************ 00:22:58.215 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:58.215 * Looking for test storage... 
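The "testfile: OK" line above is the heart of the restore check: before the target is shut down the test records an md5 of the data it pushed through the FTL bdev, and after the restore it reads the data back and verifies it with md5sum -c. A minimal sketch of that pattern in bash (schematic only, with placeholder paths and sizes, not the actual restore.sh contents):

    # before shutdown: write a known payload and record its checksum
    dd if=/dev/urandom of=testfile bs=4096 count=65536
    md5sum testfile > testfile.md5
    # ... push testfile through the FTL bdev, shut the target down, bring it back up ...
    # after restore: read the data back into testfile and confirm nothing was lost
    md5sum -c testfile.md5    # prints "testfile: OK" on success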
00:22:58.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.215 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:58.215 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:22:58.215 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:58.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.216 --rc genhtml_branch_coverage=1 00:22:58.216 --rc genhtml_function_coverage=1 00:22:58.216 --rc genhtml_legend=1 00:22:58.216 --rc geninfo_all_blocks=1 00:22:58.216 --rc geninfo_unexecuted_blocks=1 00:22:58.216 00:22:58.216 ' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:58.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.216 --rc genhtml_branch_coverage=1 00:22:58.216 --rc genhtml_function_coverage=1 00:22:58.216 --rc genhtml_legend=1 00:22:58.216 --rc geninfo_all_blocks=1 00:22:58.216 --rc geninfo_unexecuted_blocks=1 00:22:58.216 00:22:58.216 ' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:58.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.216 --rc genhtml_branch_coverage=1 00:22:58.216 --rc genhtml_function_coverage=1 00:22:58.216 --rc genhtml_legend=1 00:22:58.216 --rc geninfo_all_blocks=1 00:22:58.216 --rc geninfo_unexecuted_blocks=1 00:22:58.216 00:22:58.216 ' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:58.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.216 --rc genhtml_branch_coverage=1 00:22:58.216 --rc genhtml_function_coverage=1 00:22:58.216 --rc genhtml_legend=1 00:22:58.216 --rc geninfo_all_blocks=1 00:22:58.216 --rc geninfo_unexecuted_blocks=1 00:22:58.216 00:22:58.216 ' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:58.216 11:37:43 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77454 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77454 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77454 ']' 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.216 11:37:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.216 [2024-10-27 11:37:43.382846] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
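Here dirty_shutdown.sh has just launched its own spdk_tgt (pid 77454, core mask 0x1) and waitforlisten blocks until the target's RPC socket answers. A rough standalone approximation of that wait loop in bash, assuming the default /var/tmp/spdk.sock socket (the real helper lives in test/common/autotest_common.sh and does more bookkeeping):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # poll the RPC socket until the target is ready to accept requests
    for _ in $(seq 1 100); do
        if "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done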
00:22:58.216 [2024-10-27 11:37:43.383204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77454 ] 00:22:58.476 [2024-10-27 11:37:43.545729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.476 [2024-10-27 11:37:43.666716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:59.415 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:59.675 { 00:22:59.675 "name": "nvme0n1", 00:22:59.675 "aliases": [ 00:22:59.675 "2a957332-a76b-4d47-94d3-293afcc842de" 00:22:59.675 ], 00:22:59.675 "product_name": "NVMe disk", 00:22:59.675 "block_size": 4096, 00:22:59.675 "num_blocks": 1310720, 00:22:59.675 "uuid": "2a957332-a76b-4d47-94d3-293afcc842de", 00:22:59.675 "numa_id": -1, 00:22:59.675 "assigned_rate_limits": { 00:22:59.675 "rw_ios_per_sec": 0, 00:22:59.675 "rw_mbytes_per_sec": 0, 00:22:59.675 "r_mbytes_per_sec": 0, 00:22:59.675 "w_mbytes_per_sec": 0 00:22:59.675 }, 00:22:59.675 "claimed": true, 00:22:59.675 "claim_type": "read_many_write_one", 00:22:59.675 "zoned": false, 00:22:59.675 "supported_io_types": { 00:22:59.675 "read": true, 00:22:59.675 "write": true, 00:22:59.675 "unmap": true, 00:22:59.675 "flush": true, 00:22:59.675 "reset": true, 00:22:59.675 "nvme_admin": true, 00:22:59.675 "nvme_io": true, 00:22:59.675 "nvme_io_md": false, 00:22:59.675 "write_zeroes": true, 00:22:59.675 "zcopy": false, 00:22:59.675 "get_zone_info": false, 00:22:59.675 "zone_management": false, 00:22:59.675 "zone_append": false, 00:22:59.675 "compare": true, 00:22:59.675 "compare_and_write": false, 00:22:59.675 "abort": true, 00:22:59.675 "seek_hole": false, 00:22:59.675 "seek_data": false, 00:22:59.675 
"copy": true, 00:22:59.675 "nvme_iov_md": false 00:22:59.675 }, 00:22:59.675 "driver_specific": { 00:22:59.675 "nvme": [ 00:22:59.675 { 00:22:59.675 "pci_address": "0000:00:11.0", 00:22:59.675 "trid": { 00:22:59.675 "trtype": "PCIe", 00:22:59.675 "traddr": "0000:00:11.0" 00:22:59.675 }, 00:22:59.675 "ctrlr_data": { 00:22:59.675 "cntlid": 0, 00:22:59.675 "vendor_id": "0x1b36", 00:22:59.675 "model_number": "QEMU NVMe Ctrl", 00:22:59.675 "serial_number": "12341", 00:22:59.675 "firmware_revision": "8.0.0", 00:22:59.675 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:59.675 "oacs": { 00:22:59.675 "security": 0, 00:22:59.675 "format": 1, 00:22:59.675 "firmware": 0, 00:22:59.675 "ns_manage": 1 00:22:59.675 }, 00:22:59.675 "multi_ctrlr": false, 00:22:59.675 "ana_reporting": false 00:22:59.675 }, 00:22:59.675 "vs": { 00:22:59.675 "nvme_version": "1.4" 00:22:59.675 }, 00:22:59.675 "ns_data": { 00:22:59.675 "id": 1, 00:22:59.675 "can_share": false 00:22:59.675 } 00:22:59.675 } 00:22:59.675 ], 00:22:59.675 "mp_policy": "active_passive" 00:22:59.675 } 00:22:59.675 } 00:22:59.675 ]' 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:59.675 11:37:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:59.935 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=1f60003e-983a-4f8e-bd1b-7b74a528446b 00:22:59.935 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:59.935 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f60003e-983a-4f8e-bd1b-7b74a528446b 00:23:00.195 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:00.455 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=b62ad7ec-197d-4346-9b2b-193c9c93cd15 00:23:00.455 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b62ad7ec-197d-4346-9b2b-193c9c93cd15 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:00.715 11:37:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05be9b1d-515e-4045-9e49-efab71335f53 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:00.975 { 00:23:00.975 "name": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:00.975 "aliases": [ 00:23:00.975 "lvs/nvme0n1p0" 00:23:00.975 ], 00:23:00.975 "product_name": "Logical Volume", 00:23:00.975 "block_size": 4096, 00:23:00.975 "num_blocks": 26476544, 00:23:00.975 "uuid": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:00.975 "assigned_rate_limits": { 00:23:00.975 "rw_ios_per_sec": 0, 00:23:00.975 "rw_mbytes_per_sec": 0, 00:23:00.975 "r_mbytes_per_sec": 0, 00:23:00.975 "w_mbytes_per_sec": 0 00:23:00.975 }, 00:23:00.975 "claimed": false, 00:23:00.975 "zoned": false, 00:23:00.975 "supported_io_types": { 00:23:00.975 "read": true, 00:23:00.975 "write": true, 00:23:00.975 "unmap": true, 00:23:00.975 "flush": false, 00:23:00.975 "reset": true, 00:23:00.975 "nvme_admin": false, 00:23:00.975 "nvme_io": false, 00:23:00.975 "nvme_io_md": false, 00:23:00.975 "write_zeroes": true, 00:23:00.975 "zcopy": false, 00:23:00.975 "get_zone_info": false, 00:23:00.975 "zone_management": false, 00:23:00.975 "zone_append": false, 00:23:00.975 "compare": false, 00:23:00.975 "compare_and_write": false, 00:23:00.975 "abort": false, 00:23:00.975 "seek_hole": true, 00:23:00.975 "seek_data": true, 00:23:00.975 "copy": false, 00:23:00.975 "nvme_iov_md": false 00:23:00.975 }, 00:23:00.975 "driver_specific": { 00:23:00.975 "lvol": { 00:23:00.975 "lvol_store_uuid": "b62ad7ec-197d-4346-9b2b-193c9c93cd15", 00:23:00.975 "base_bdev": "nvme0n1", 00:23:00.975 "thin_provision": true, 00:23:00.975 "num_allocated_clusters": 0, 00:23:00.975 "snapshot": false, 00:23:00.975 "clone": false, 00:23:00.975 "esnap_clone": false 00:23:00.975 } 00:23:00.975 } 00:23:00.975 } 00:23:00.975 ]' 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:00.975 11:37:46 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 05be9b1d-515e-4045-9e49-efab71335f53 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=05be9b1d-515e-4045-9e49-efab71335f53 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:01.235 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05be9b1d-515e-4045-9e49-efab71335f53 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:01.495 { 00:23:01.495 "name": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:01.495 "aliases": [ 00:23:01.495 "lvs/nvme0n1p0" 00:23:01.495 ], 00:23:01.495 "product_name": "Logical Volume", 00:23:01.495 "block_size": 4096, 00:23:01.495 "num_blocks": 26476544, 00:23:01.495 "uuid": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:01.495 "assigned_rate_limits": { 00:23:01.495 "rw_ios_per_sec": 0, 00:23:01.495 "rw_mbytes_per_sec": 0, 00:23:01.495 "r_mbytes_per_sec": 0, 00:23:01.495 "w_mbytes_per_sec": 0 00:23:01.495 }, 00:23:01.495 "claimed": false, 00:23:01.495 "zoned": false, 00:23:01.495 "supported_io_types": { 00:23:01.495 "read": true, 00:23:01.495 "write": true, 00:23:01.495 "unmap": true, 00:23:01.495 "flush": false, 00:23:01.495 "reset": true, 00:23:01.495 "nvme_admin": false, 00:23:01.495 "nvme_io": false, 00:23:01.495 "nvme_io_md": false, 00:23:01.495 "write_zeroes": true, 00:23:01.495 "zcopy": false, 00:23:01.495 "get_zone_info": false, 00:23:01.495 "zone_management": false, 00:23:01.495 "zone_append": false, 00:23:01.495 "compare": false, 00:23:01.495 "compare_and_write": false, 00:23:01.495 "abort": false, 00:23:01.495 "seek_hole": true, 00:23:01.495 "seek_data": true, 00:23:01.495 "copy": false, 00:23:01.495 "nvme_iov_md": false 00:23:01.495 }, 00:23:01.495 "driver_specific": { 00:23:01.495 "lvol": { 00:23:01.495 "lvol_store_uuid": "b62ad7ec-197d-4346-9b2b-193c9c93cd15", 00:23:01.495 "base_bdev": "nvme0n1", 00:23:01.495 "thin_provision": true, 00:23:01.495 "num_allocated_clusters": 0, 00:23:01.495 "snapshot": false, 00:23:01.495 "clone": false, 00:23:01.495 "esnap_clone": false 00:23:01.495 } 00:23:01.495 } 00:23:01.495 } 00:23:01.495 ]' 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:01.495 11:37:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:01.754 11:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:01.754 11:37:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 05be9b1d-515e-4045-9e49-efab71335f53 00:23:01.754 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=05be9b1d-515e-4045-9e49-efab71335f53 00:23:01.755 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:01.755 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:01.755 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:01.755 11:37:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 05be9b1d-515e-4045-9e49-efab71335f53 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:02.015 { 00:23:02.015 "name": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:02.015 "aliases": [ 00:23:02.015 "lvs/nvme0n1p0" 00:23:02.015 ], 00:23:02.015 "product_name": "Logical Volume", 00:23:02.015 "block_size": 4096, 00:23:02.015 "num_blocks": 26476544, 00:23:02.015 "uuid": "05be9b1d-515e-4045-9e49-efab71335f53", 00:23:02.015 "assigned_rate_limits": { 00:23:02.015 "rw_ios_per_sec": 0, 00:23:02.015 "rw_mbytes_per_sec": 0, 00:23:02.015 "r_mbytes_per_sec": 0, 00:23:02.015 "w_mbytes_per_sec": 0 00:23:02.015 }, 00:23:02.015 "claimed": false, 00:23:02.015 "zoned": false, 00:23:02.015 "supported_io_types": { 00:23:02.015 "read": true, 00:23:02.015 "write": true, 00:23:02.015 "unmap": true, 00:23:02.015 "flush": false, 00:23:02.015 "reset": true, 00:23:02.015 "nvme_admin": false, 00:23:02.015 "nvme_io": false, 00:23:02.015 "nvme_io_md": false, 00:23:02.015 "write_zeroes": true, 00:23:02.015 "zcopy": false, 00:23:02.015 "get_zone_info": false, 00:23:02.015 "zone_management": false, 00:23:02.015 "zone_append": false, 00:23:02.015 "compare": false, 00:23:02.015 "compare_and_write": false, 00:23:02.015 "abort": false, 00:23:02.015 "seek_hole": true, 00:23:02.015 "seek_data": true, 00:23:02.015 "copy": false, 00:23:02.015 "nvme_iov_md": false 00:23:02.015 }, 00:23:02.015 "driver_specific": { 00:23:02.015 "lvol": { 00:23:02.015 "lvol_store_uuid": "b62ad7ec-197d-4346-9b2b-193c9c93cd15", 00:23:02.015 "base_bdev": "nvme0n1", 00:23:02.015 "thin_provision": true, 00:23:02.015 "num_allocated_clusters": 0, 00:23:02.015 "snapshot": false, 00:23:02.015 "clone": false, 00:23:02.015 "esnap_clone": false 00:23:02.015 } 00:23:02.015 } 00:23:02.015 } 00:23:02.015 ]' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 05be9b1d-515e-4045-9e49-efab71335f53 
--l2p_dram_limit 10' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:02.015 11:37:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 05be9b1d-515e-4045-9e49-efab71335f53 --l2p_dram_limit 10 -c nvc0n1p0 00:23:02.276 [2024-10-27 11:37:47.311513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.311642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.276 [2024-10-27 11:37:47.311661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.276 [2024-10-27 11:37:47.311669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.311721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.311729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.276 [2024-10-27 11:37:47.311738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:02.276 [2024-10-27 11:37:47.311758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.311779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:02.276 [2024-10-27 11:37:47.312347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.276 [2024-10-27 11:37:47.312362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.312368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.276 [2024-10-27 11:37:47.312376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:23:02.276 [2024-10-27 11:37:47.312382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.312434] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID adc2a391-0658-4d35-80d0-3e1990da8c64 00:23:02.276 [2024-10-27 11:37:47.313418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.313440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:02.276 [2024-10-27 11:37:47.313447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:02.276 [2024-10-27 11:37:47.313454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.318187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.318217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.276 [2024-10-27 11:37:47.318225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms 00:23:02.276 [2024-10-27 11:37:47.318234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.318313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.318322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.276 [2024-10-27 11:37:47.318329] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:02.276 [2024-10-27 11:37:47.318339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.318387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.318397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.276 [2024-10-27 11:37:47.318403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:02.276 [2024-10-27 11:37:47.318410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.318428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.276 [2024-10-27 11:37:47.321286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.321322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.276 [2024-10-27 11:37:47.321332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:23:02.276 [2024-10-27 11:37:47.321340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.321367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.321374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.276 [2024-10-27 11:37:47.321381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:02.276 [2024-10-27 11:37:47.321387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.321407] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:02.276 [2024-10-27 11:37:47.321511] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.276 [2024-10-27 11:37:47.321523] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.276 [2024-10-27 11:37:47.321531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.276 [2024-10-27 11:37:47.321540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321547] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321554] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.276 [2024-10-27 11:37:47.321560] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.276 [2024-10-27 11:37:47.321566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.276 [2024-10-27 11:37:47.321572] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.276 [2024-10-27 11:37:47.321580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.321586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.276 [2024-10-27 11:37:47.321593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:23:02.276 [2024-10-27 11:37:47.321603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.321668] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.276 [2024-10-27 11:37:47.321674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.276 [2024-10-27 11:37:47.321681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:02.276 [2024-10-27 11:37:47.321687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.276 [2024-10-27 11:37:47.321760] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.276 [2024-10-27 11:37:47.321769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.276 [2024-10-27 11:37:47.321776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.276 [2024-10-27 11:37:47.321794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.276 [2024-10-27 11:37:47.321811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.276 [2024-10-27 11:37:47.321823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.276 [2024-10-27 11:37:47.321829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.276 [2024-10-27 11:37:47.321835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.276 [2024-10-27 11:37:47.321840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.276 [2024-10-27 11:37:47.321846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.276 [2024-10-27 11:37:47.321851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.276 [2024-10-27 11:37:47.321864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.276 [2024-10-27 11:37:47.321882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.276 [2024-10-27 11:37:47.321902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.276 [2024-10-27 11:37:47.321909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.276 [2024-10-27 11:37:47.321914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.276 [2024-10-27 11:37:47.321920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.277 [2024-10-27 11:37:47.321925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.277 [2024-10-27 11:37:47.321931] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.277 [2024-10-27 11:37:47.321936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.277 [2024-10-27 11:37:47.321942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.277 [2024-10-27 11:37:47.321947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.277 [2024-10-27 11:37:47.321955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.277 [2024-10-27 11:37:47.321960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.277 [2024-10-27 11:37:47.321967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:02.277 [2024-10-27 11:37:47.321971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.277 [2024-10-27 11:37:47.321977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.277 [2024-10-27 11:37:47.321982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.277 [2024-10-27 11:37:47.321989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.277 [2024-10-27 11:37:47.321994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.277 [2024-10-27 11:37:47.322000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.277 [2024-10-27 11:37:47.322005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.277 [2024-10-27 11:37:47.322011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.277 [2024-10-27 11:37:47.322016] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.277 [2024-10-27 11:37:47.322023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.277 [2024-10-27 11:37:47.322028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.277 [2024-10-27 11:37:47.322035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.277 [2024-10-27 11:37:47.322041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.277 [2024-10-27 11:37:47.322050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.277 [2024-10-27 11:37:47.322055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.277 [2024-10-27 11:37:47.322062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.277 [2024-10-27 11:37:47.322066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.277 [2024-10-27 11:37:47.322072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.277 [2024-10-27 11:37:47.322080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.277 [2024-10-27 11:37:47.322090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.277 [2024-10-27 11:37:47.322103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.277 [2024-10-27 11:37:47.322109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.277 [2024-10-27 11:37:47.322115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.277 [2024-10-27 11:37:47.322121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.277 [2024-10-27 11:37:47.322127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.277 [2024-10-27 11:37:47.322133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.277 [2024-10-27 11:37:47.322139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.277 [2024-10-27 11:37:47.322144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.277 [2024-10-27 11:37:47.322152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.277 [2024-10-27 11:37:47.322182] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.277 [2024-10-27 11:37:47.322189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.277 [2024-10-27 11:37:47.322204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.277 [2024-10-27 11:37:47.322209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.277 [2024-10-27 11:37:47.322217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.277 [2024-10-27 11:37:47.322223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.277 [2024-10-27 11:37:47.322230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.277 [2024-10-27 11:37:47.322235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:23:02.277 [2024-10-27 11:37:47.322242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.277 [2024-10-27 11:37:47.322281] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:02.277 [2024-10-27 11:37:47.322291] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:06.474 [2024-10-27 11:37:51.209919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.474 [2024-10-27 11:37:51.210021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:06.474 [2024-10-27 11:37:51.210042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3887.618 ms 00:23:06.474 [2024-10-27 11:37:51.210054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.474 [2024-10-27 11:37:51.241673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.474 [2024-10-27 11:37:51.241923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.474 [2024-10-27 11:37:51.241946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.339 ms 00:23:06.474 [2024-10-27 11:37:51.241957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.474 [2024-10-27 11:37:51.242101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.474 [2024-10-27 11:37:51.242115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:06.474 [2024-10-27 11:37:51.242125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:06.474 [2024-10-27 11:37:51.242139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.474 [2024-10-27 11:37:51.277277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.474 [2024-10-27 11:37:51.277338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.474 [2024-10-27 11:37:51.277350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.086 ms 00:23:06.474 [2024-10-27 11:37:51.277360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.474 [2024-10-27 11:37:51.277394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.474 [2024-10-27 11:37:51.277407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.474 [2024-10-27 11:37:51.277416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:06.474 [2024-10-27 11:37:51.277430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.474 [2024-10-27 11:37:51.277981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.278008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.475 [2024-10-27 11:37:51.278018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:23:06.475 [2024-10-27 11:37:51.278028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.278140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.278152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.475 [2024-10-27 11:37:51.278161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:06.475 [2024-10-27 11:37:51.278174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.295416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.295463] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.475 [2024-10-27 11:37:51.295474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.219 ms 00:23:06.475 [2024-10-27 11:37:51.295487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.308565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:06.475 [2024-10-27 11:37:51.312524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.312566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:06.475 [2024-10-27 11:37:51.312580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.949 ms 00:23:06.475 [2024-10-27 11:37:51.312588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.417377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.417611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:06.475 [2024-10-27 11:37:51.417643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.750 ms 00:23:06.475 [2024-10-27 11:37:51.417654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.417858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.417871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.475 [2024-10-27 11:37:51.417886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:23:06.475 [2024-10-27 11:37:51.417898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.443529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.443578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:06.475 [2024-10-27 11:37:51.443594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.570 ms 00:23:06.475 [2024-10-27 11:37:51.443603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.468259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.468320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:06.475 [2024-10-27 11:37:51.468337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.600 ms 00:23:06.475 [2024-10-27 11:37:51.468344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.469016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.469115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.475 [2024-10-27 11:37:51.469128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:23:06.475 [2024-10-27 11:37:51.469136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.557949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.558004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:06.475 [2024-10-27 11:37:51.558025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.762 ms 00:23:06.475 [2024-10-27 11:37:51.558034] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.585640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.585691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:06.475 [2024-10-27 11:37:51.585711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:23:06.475 [2024-10-27 11:37:51.585720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.611209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.611254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:06.475 [2024-10-27 11:37:51.611270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:23:06.475 [2024-10-27 11:37:51.611278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.637637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.637683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.475 [2024-10-27 11:37:51.637699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.270 ms 00:23:06.475 [2024-10-27 11:37:51.637708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.637763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.637773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.475 [2024-10-27 11:37:51.637788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:06.475 [2024-10-27 11:37:51.637796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.637892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.475 [2024-10-27 11:37:51.637903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.475 [2024-10-27 11:37:51.637914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:06.475 [2024-10-27 11:37:51.637921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.475 [2024-10-27 11:37:51.639157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4327.062 ms, result 0 00:23:06.475 { 00:23:06.475 "name": "ftl0", 00:23:06.475 "uuid": "adc2a391-0658-4d35-80d0-3e1990da8c64" 00:23:06.475 } 00:23:06.475 11:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:06.475 11:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:06.734 11:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:06.734 11:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:06.734 11:37:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:06.995 /dev/nbd0 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:06.995 1+0 records in 00:23:06.995 1+0 records out 00:23:06.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037543 s, 10.9 MB/s 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:06.995 11:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:06.995 [2024-10-27 11:37:52.115518] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:23:06.995 [2024-10-27 11:37:52.115605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77605 ] 00:23:06.995 [2024-10-27 11:37:52.270833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.255 [2024-10-27 11:37:52.365689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.635  [2024-10-27T11:37:54.857Z] Copying: 193/1024 [MB] (193 MBps) [2024-10-27T11:37:55.797Z] Copying: 380/1024 [MB] (187 MBps) [2024-10-27T11:37:56.735Z] Copying: 567/1024 [MB] (186 MBps) [2024-10-27T11:37:57.672Z] Copying: 759/1024 [MB] (192 MBps) [2024-10-27T11:37:57.672Z] Copying: 1009/1024 [MB] (250 MBps) [2024-10-27T11:37:58.241Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:23:12.960 00:23:13.220 11:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:15.126 11:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:15.126 [2024-10-27 11:38:00.305283] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:23:15.126 [2024-10-27 11:38:00.305384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77692 ] 00:23:15.384 [2024-10-27 11:38:00.459911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.384 [2024-10-27 11:38:00.552779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.763  [2024-10-27T11:38:02.984Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-27T11:38:03.924Z] Copying: 43/1024 [MB] (19 MBps) [2024-10-27T11:38:04.864Z] Copying: 66/1024 [MB] (22 MBps) [2024-10-27T11:38:05.807Z] Copying: 93/1024 [MB] (27 MBps) [2024-10-27T11:38:07.192Z] Copying: 121/1024 [MB] (28 MBps) [2024-10-27T11:38:08.132Z] Copying: 152/1024 [MB] (30 MBps) [2024-10-27T11:38:09.076Z] Copying: 174/1024 [MB] (22 MBps) [2024-10-27T11:38:10.058Z] Copying: 192/1024 [MB] (17 MBps) [2024-10-27T11:38:11.053Z] Copying: 205992/1048576 [kB] (9216 kBps) [2024-10-27T11:38:11.993Z] Copying: 215/1024 [MB] (14 MBps) [2024-10-27T11:38:12.926Z] Copying: 243/1024 [MB] (27 MBps) [2024-10-27T11:38:13.862Z] Copying: 263/1024 [MB] (20 MBps) [2024-10-27T11:38:14.794Z] Copying: 279/1024 [MB] (16 MBps) [2024-10-27T11:38:16.167Z] Copying: 298/1024 [MB] (18 MBps) [2024-10-27T11:38:17.101Z] Copying: 316/1024 [MB] (17 MBps) [2024-10-27T11:38:18.036Z] Copying: 346/1024 [MB] (30 MBps) [2024-10-27T11:38:18.969Z] Copying: 364/1024 [MB] (17 MBps) [2024-10-27T11:38:19.912Z] Copying: 380/1024 [MB] (15 MBps) [2024-10-27T11:38:20.848Z] Copying: 406/1024 [MB] (26 MBps) [2024-10-27T11:38:21.783Z] Copying: 437/1024 [MB] (31 MBps) [2024-10-27T11:38:23.157Z] Copying: 453/1024 [MB] (15 MBps) [2024-10-27T11:38:24.090Z] Copying: 472/1024 [MB] (18 MBps) [2024-10-27T11:38:25.024Z] Copying: 491/1024 [MB] (19 MBps) [2024-10-27T11:38:25.958Z] Copying: 511/1024 [MB] (19 MBps) [2024-10-27T11:38:26.892Z] Copying: 525/1024 [MB] (14 MBps) [2024-10-27T11:38:27.826Z] Copying: 546/1024 [MB] (20 MBps) [2024-10-27T11:38:29.203Z] Copying: 571/1024 [MB] (24 MBps) [2024-10-27T11:38:29.769Z] Copying: 597/1024 [MB] (25 MBps) [2024-10-27T11:38:31.145Z] Copying: 611/1024 [MB] (14 MBps) [2024-10-27T11:38:32.078Z] Copying: 630/1024 [MB] (18 MBps) [2024-10-27T11:38:33.013Z] Copying: 648/1024 [MB] (18 MBps) [2024-10-27T11:38:33.947Z] Copying: 676/1024 [MB] (28 MBps) [2024-10-27T11:38:34.881Z] Copying: 711/1024 [MB] (35 MBps) [2024-10-27T11:38:35.815Z] Copying: 743/1024 [MB] (31 MBps) [2024-10-27T11:38:37.189Z] Copying: 763/1024 [MB] (19 MBps) [2024-10-27T11:38:38.123Z] Copying: 778/1024 [MB] (15 MBps) [2024-10-27T11:38:39.058Z] Copying: 793/1024 [MB] (14 MBps) [2024-10-27T11:38:39.991Z] Copying: 815/1024 [MB] (21 MBps) [2024-10-27T11:38:40.925Z] Copying: 851/1024 [MB] (36 MBps) [2024-10-27T11:38:41.924Z] Copying: 871/1024 [MB] (20 MBps) [2024-10-27T11:38:42.859Z] Copying: 885/1024 [MB] (14 MBps) [2024-10-27T11:38:43.794Z] Copying: 900/1024 [MB] (15 MBps) [2024-10-27T11:38:45.165Z] Copying: 934/1024 [MB] (33 MBps) [2024-10-27T11:38:46.095Z] Copying: 965/1024 [MB] (31 MBps) [2024-10-27T11:38:47.030Z] Copying: 982/1024 [MB] (16 MBps) [2024-10-27T11:38:47.290Z] Copying: 1015/1024 [MB] (33 MBps) [2024-10-27T11:38:48.224Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:24:02.943 00:24:02.943 11:38:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:02.943 11:38:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:02.943 11:38:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:03.205 [2024-10-27 11:38:48.227499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.227546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:03.205 [2024-10-27 11:38:48.227559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:03.205 [2024-10-27 11:38:48.227570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.227592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:03.205 [2024-10-27 11:38:48.230167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.230291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:03.205 [2024-10-27 11:38:48.230321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:24:03.205 [2024-10-27 11:38:48.230329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.232900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.232931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:03.205 [2024-10-27 11:38:48.232942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:24:03.205 [2024-10-27 11:38:48.232949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.250896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.251009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:03.205 [2024-10-27 11:38:48.251068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.917 ms 00:24:03.205 [2024-10-27 11:38:48.251094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.257269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.257378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:03.205 [2024-10-27 11:38:48.257432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.130 ms 00:24:03.205 [2024-10-27 11:38:48.257459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.281450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.281558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:03.205 [2024-10-27 11:38:48.281576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.902 ms 00:24:03.205 [2024-10-27 11:38:48.281583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.296699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.296812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:03.205 [2024-10-27 11:38:48.296832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.079 ms 00:24:03.205 [2024-10-27 11:38:48.296841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 
11:38:48.297002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.297014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:03.205 [2024-10-27 11:38:48.297026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:03.205 [2024-10-27 11:38:48.297034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.320634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.320666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:03.205 [2024-10-27 11:38:48.320678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.582 ms 00:24:03.205 [2024-10-27 11:38:48.320685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.339775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.339869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:03.205 [2024-10-27 11:38:48.339883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.055 ms 00:24:03.205 [2024-10-27 11:38:48.339889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.205 [2024-10-27 11:38:48.357168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.205 [2024-10-27 11:38:48.357192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:03.205 [2024-10-27 11:38:48.357201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.250 ms 00:24:03.205 [2024-10-27 11:38:48.357206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.206 [2024-10-27 11:38:48.373797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.206 [2024-10-27 11:38:48.373821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:03.206 [2024-10-27 11:38:48.373830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.534 ms 00:24:03.206 [2024-10-27 11:38:48.373835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.206 [2024-10-27 11:38:48.373864] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:03.206 [2024-10-27 11:38:48.373875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373929] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.373992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 
11:38:48.374095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:24:03.206 [2024-10-27 11:38:48.374256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:03.206 [2024-10-27 11:38:48.374494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:03.207 [2024-10-27 11:38:48.374565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:03.207 [2024-10-27 11:38:48.374571] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: adc2a391-0658-4d35-80d0-3e1990da8c64 00:24:03.207 [2024-10-27 11:38:48.374582] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:03.207 [2024-10-27 11:38:48.374590] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:03.207 [2024-10-27 11:38:48.374596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:03.207 [2024-10-27 11:38:48.374603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:03.207 [2024-10-27 11:38:48.374608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:03.207 [2024-10-27 11:38:48.374616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:03.207 [2024-10-27 11:38:48.374621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:03.207 [2024-10-27 11:38:48.374627] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:03.207 [2024-10-27 11:38:48.374632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:03.207 [2024-10-27 11:38:48.374638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.207 [2024-10-27 11:38:48.374644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:03.207 [2024-10-27 11:38:48.374651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:24:03.207 [2024-10-27 11:38:48.374656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.384036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.207 [2024-10-27 11:38:48.384059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:03.207 [2024-10-27 11:38:48.384068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.333 ms 00:24:03.207 [2024-10-27 11:38:48.384075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.384352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.207 [2024-10-27 11:38:48.384362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:03.207 [2024-10-27 11:38:48.384370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:24:03.207 [2024-10-27 11:38:48.384375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.417141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.207 [2024-10-27 11:38:48.417167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.207 [2024-10-27 11:38:48.417176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.207 [2024-10-27 11:38:48.417184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.417228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.207 [2024-10-27 11:38:48.417234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.207 [2024-10-27 11:38:48.417245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.207 [2024-10-27 11:38:48.417251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.417315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.207 [2024-10-27 11:38:48.417323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.207 [2024-10-27 11:38:48.417331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.207 [2024-10-27 11:38:48.417337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.417358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.207 [2024-10-27 11:38:48.417366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.207 [2024-10-27 11:38:48.417373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.207 [2024-10-27 11:38:48.417379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.207 [2024-10-27 11:38:48.475989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.207 [2024-10-27 11:38:48.476023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:24:03.207 [2024-10-27 11:38:48.476033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.207 [2024-10-27 11:38:48.476041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.523598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.523733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.467 [2024-10-27 11:38:48.523747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.523754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.523829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.523836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.467 [2024-10-27 11:38:48.523843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.523849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.523888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.523895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.467 [2024-10-27 11:38:48.523902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.523908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.523979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.523987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.467 [2024-10-27 11:38:48.523994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.524000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.524026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.524034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.467 [2024-10-27 11:38:48.524041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.524047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.524075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.524081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.467 [2024-10-27 11:38:48.524089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.524094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.524131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.467 [2024-10-27 11:38:48.524138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.467 [2024-10-27 11:38:48.524146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.467 [2024-10-27 11:38:48.524151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.467 [2024-10-27 11:38:48.524251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
296.729 ms, result 0 00:24:03.467 true 00:24:03.467 11:38:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77454 00:24:03.467 11:38:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77454 00:24:03.467 11:38:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:03.467 [2024-10-27 11:38:48.615129] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:24:03.467 [2024-10-27 11:38:48.615368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78195 ] 00:24:03.727 [2024-10-27 11:38:48.770856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.727 [2024-10-27 11:38:48.849754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.113  [2024-10-27T11:38:51.346Z] Copying: 259/1024 [MB] (259 MBps) [2024-10-27T11:38:52.286Z] Copying: 523/1024 [MB] (263 MBps) [2024-10-27T11:38:53.228Z] Copying: 786/1024 [MB] (263 MBps) [2024-10-27T11:38:53.488Z] Copying: 1024/1024 [MB] (average 262 MBps) 00:24:08.207 00:24:08.467 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77454 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:08.467 11:38:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:08.467 [2024-10-27 11:38:53.557782] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:24:08.467 [2024-10-27 11:38:53.558086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78251 ] 00:24:08.467 [2024-10-27 11:38:53.713534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.729 [2024-10-27 11:38:53.791557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.729 [2024-10-27 11:38:53.996176] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.729 [2024-10-27 11:38:53.996372] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.991 [2024-10-27 11:38:54.058910] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:08.991 [2024-10-27 11:38:54.059288] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:08.991 [2024-10-27 11:38:54.059641] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:08.991 [2024-10-27 11:38:54.241271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.241313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.991 [2024-10-27 11:38:54.241323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:08.991 [2024-10-27 11:38:54.241329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.241368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.241375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.991 [2024-10-27 11:38:54.241382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:08.991 [2024-10-27 11:38:54.241387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.241400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.991 [2024-10-27 11:38:54.241912] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.991 [2024-10-27 11:38:54.241929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.241934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.991 [2024-10-27 11:38:54.241940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:24:08.991 [2024-10-27 11:38:54.241946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.242850] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.991 [2024-10-27 11:38:54.252459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.252565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.991 [2024-10-27 11:38:54.252578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.610 ms 00:24:08.991 [2024-10-27 11:38:54.252584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.252714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.252722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:08.991 [2024-10-27 11:38:54.252729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:08.991 [2024-10-27 11:38:54.252734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.257054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.257077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.991 [2024-10-27 11:38:54.257084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.276 ms 00:24:08.991 [2024-10-27 11:38:54.257090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.257143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.257150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.991 [2024-10-27 11:38:54.257156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:08.991 [2024-10-27 11:38:54.257161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.257192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.257202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.991 [2024-10-27 11:38:54.257208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.991 [2024-10-27 11:38:54.257214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.257227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.991 [2024-10-27 11:38:54.259907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.260001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.991 [2024-10-27 11:38:54.260012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:24:08.991 [2024-10-27 11:38:54.260019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.260048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.260055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.991 [2024-10-27 11:38:54.260061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:08.991 [2024-10-27 11:38:54.260066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.260082] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.991 [2024-10-27 11:38:54.260099] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.991 [2024-10-27 11:38:54.260124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.991 [2024-10-27 11:38:54.260136] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:08.991 [2024-10-27 11:38:54.260213] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.991 [2024-10-27 11:38:54.260221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.991 
[2024-10-27 11:38:54.260229] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:08.991 [2024-10-27 11:38:54.260237] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.991 [2024-10-27 11:38:54.260245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.991 [2024-10-27 11:38:54.260251] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.991 [2024-10-27 11:38:54.260256] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.991 [2024-10-27 11:38:54.260262] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.991 [2024-10-27 11:38:54.260268] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.991 [2024-10-27 11:38:54.260273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.260279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.991 [2024-10-27 11:38:54.260284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:24:08.991 [2024-10-27 11:38:54.260290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.260365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.991 [2024-10-27 11:38:54.260371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.991 [2024-10-27 11:38:54.260378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:08.991 [2024-10-27 11:38:54.260383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.991 [2024-10-27 11:38:54.260457] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.991 [2024-10-27 11:38:54.260464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.992 [2024-10-27 11:38:54.260471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.992 [2024-10-27 11:38:54.260487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.992 [2024-10-27 11:38:54.260503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.992 [2024-10-27 11:38:54.260513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.992 [2024-10-27 11:38:54.260522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:08.992 [2024-10-27 11:38:54.260527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.992 [2024-10-27 11:38:54.260532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.992 [2024-10-27 11:38:54.260538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.992 [2024-10-27 11:38:54.260543] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.992 [2024-10-27 11:38:54.260553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.992 [2024-10-27 11:38:54.260567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.992 [2024-10-27 11:38:54.260582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.992 [2024-10-27 11:38:54.260597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.992 [2024-10-27 11:38:54.260612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.992 [2024-10-27 11:38:54.260626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.992 [2024-10-27 11:38:54.260635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.992 [2024-10-27 11:38:54.260640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.992 [2024-10-27 11:38:54.260645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.992 [2024-10-27 11:38:54.260650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.992 [2024-10-27 11:38:54.260655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.992 [2024-10-27 11:38:54.260660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.992 [2024-10-27 11:38:54.260669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.992 [2024-10-27 11:38:54.260674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 11:38:54.260682] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.992 [2024-10-27 11:38:54.260688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.992 [2024-10-27 11:38:54.260693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.992 [2024-10-27 
11:38:54.260706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.992 [2024-10-27 11:38:54.260711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.992 [2024-10-27 11:38:54.260716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.992 [2024-10-27 11:38:54.260720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.992 [2024-10-27 11:38:54.260725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.992 [2024-10-27 11:38:54.260730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.992 [2024-10-27 11:38:54.260737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.992 [2024-10-27 11:38:54.260743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.992 [2024-10-27 11:38:54.260755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.992 [2024-10-27 11:38:54.260760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.992 [2024-10-27 11:38:54.260766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.992 [2024-10-27 11:38:54.260771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.992 [2024-10-27 11:38:54.260776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.992 [2024-10-27 11:38:54.260781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.992 [2024-10-27 11:38:54.260786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.992 [2024-10-27 11:38:54.260791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.992 [2024-10-27 11:38:54.260797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.992 [2024-10-27 11:38:54.260824] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:08.992 [2024-10-27 11:38:54.260830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.992 [2024-10-27 11:38:54.260841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.992 [2024-10-27 11:38:54.260846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.992 [2024-10-27 11:38:54.260851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.992 [2024-10-27 11:38:54.260858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.992 [2024-10-27 11:38:54.260864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.992 [2024-10-27 11:38:54.260869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:24:08.992 [2024-10-27 11:38:54.260875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.281664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.281693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.254 [2024-10-27 11:38:54.281702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.758 ms 00:24:09.254 [2024-10-27 11:38:54.281707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.281774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.281783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:09.254 [2024-10-27 11:38:54.281789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:09.254 [2024-10-27 11:38:54.281795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.334372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.334419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.254 [2024-10-27 11:38:54.334429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.532 ms 00:24:09.254 [2024-10-27 11:38:54.334438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.334483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.334490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:09.254 [2024-10-27 11:38:54.334497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:09.254 [2024-10-27 11:38:54.334503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.334839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.334852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.254 [2024-10-27 11:38:54.334860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:24:09.254 [2024-10-27 11:38:54.334866] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.334969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.334975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.254 [2024-10-27 11:38:54.334982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:09.254 [2024-10-27 11:38:54.334987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.345418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.345547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.254 [2024-10-27 11:38:54.345560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.414 ms 00:24:09.254 [2024-10-27 11:38:54.345565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.355764] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:09.254 [2024-10-27 11:38:54.355793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:09.254 [2024-10-27 11:38:54.355802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.355809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:09.254 [2024-10-27 11:38:54.355816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:24:09.254 [2024-10-27 11:38:54.355822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.374395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.374430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:09.254 [2024-10-27 11:38:54.374448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.540 ms 00:24:09.254 [2024-10-27 11:38:54.374454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.254 [2024-10-27 11:38:54.383425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.254 [2024-10-27 11:38:54.383453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:09.255 [2024-10-27 11:38:54.383461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.926 ms 00:24:09.255 [2024-10-27 11:38:54.383467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.392201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.392223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:09.255 [2024-10-27 11:38:54.392230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.707 ms 00:24:09.255 [2024-10-27 11:38:54.392236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.392709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.392720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:09.255 [2024-10-27 11:38:54.392727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:24:09.255 [2024-10-27 11:38:54.392733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 
[2024-10-27 11:38:54.436006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.436192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:09.255 [2024-10-27 11:38:54.436237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.259 ms 00:24:09.255 [2024-10-27 11:38:54.436256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.444397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:09.255 [2024-10-27 11:38:54.446563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.446648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:09.255 [2024-10-27 11:38:54.446692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.249 ms 00:24:09.255 [2024-10-27 11:38:54.446711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.446808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.446830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:09.255 [2024-10-27 11:38:54.446846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:09.255 [2024-10-27 11:38:54.446883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.446953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.447014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:09.255 [2024-10-27 11:38:54.447032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:09.255 [2024-10-27 11:38:54.447047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.447075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.447094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:09.255 [2024-10-27 11:38:54.447175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:09.255 [2024-10-27 11:38:54.447193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.447227] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:09.255 [2024-10-27 11:38:54.447245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.447260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:09.255 [2024-10-27 11:38:54.447276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:09.255 [2024-10-27 11:38:54.447336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.467027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.467192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:09.255 [2024-10-27 11:38:54.467269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.662 ms 00:24:09.255 [2024-10-27 11:38:54.467319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.467419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.255 [2024-10-27 11:38:54.467466] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:09.255 [2024-10-27 11:38:54.467540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:09.255 [2024-10-27 11:38:54.467570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.255 [2024-10-27 11:38:54.468663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.856 ms, result 0 00:24:10.641  [2024-10-27T11:38:56.495Z] Copying: 37/1024 [MB] (37 MBps) [2024-10-27T11:38:57.883Z] Copying: 49/1024 [MB] (11 MBps) [2024-10-27T11:38:58.828Z] Copying: 59/1024 [MB] (10 MBps) [2024-10-27T11:38:59.771Z] Copying: 73/1024 [MB] (13 MBps) [2024-10-27T11:39:00.728Z] Copying: 86/1024 [MB] (12 MBps) [2024-10-27T11:39:01.673Z] Copying: 101/1024 [MB] (15 MBps) [2024-10-27T11:39:02.616Z] Copying: 115/1024 [MB] (13 MBps) [2024-10-27T11:39:03.585Z] Copying: 141/1024 [MB] (26 MBps) [2024-10-27T11:39:04.527Z] Copying: 185/1024 [MB] (43 MBps) [2024-10-27T11:39:05.915Z] Copying: 207/1024 [MB] (21 MBps) [2024-10-27T11:39:06.488Z] Copying: 229/1024 [MB] (22 MBps) [2024-10-27T11:39:07.873Z] Copying: 240/1024 [MB] (11 MBps) [2024-10-27T11:39:08.815Z] Copying: 281/1024 [MB] (40 MBps) [2024-10-27T11:39:09.757Z] Copying: 308/1024 [MB] (26 MBps) [2024-10-27T11:39:10.700Z] Copying: 330/1024 [MB] (22 MBps) [2024-10-27T11:39:11.644Z] Copying: 353/1024 [MB] (22 MBps) [2024-10-27T11:39:12.641Z] Copying: 374/1024 [MB] (21 MBps) [2024-10-27T11:39:13.616Z] Copying: 396/1024 [MB] (21 MBps) [2024-10-27T11:39:14.559Z] Copying: 409/1024 [MB] (12 MBps) [2024-10-27T11:39:15.503Z] Copying: 426/1024 [MB] (16 MBps) [2024-10-27T11:39:16.894Z] Copying: 443/1024 [MB] (16 MBps) [2024-10-27T11:39:17.836Z] Copying: 460/1024 [MB] (16 MBps) [2024-10-27T11:39:18.779Z] Copying: 475/1024 [MB] (15 MBps) [2024-10-27T11:39:19.722Z] Copying: 490/1024 [MB] (14 MBps) [2024-10-27T11:39:20.665Z] Copying: 512/1024 [MB] (21 MBps) [2024-10-27T11:39:21.610Z] Copying: 527/1024 [MB] (14 MBps) [2024-10-27T11:39:22.553Z] Copying: 537/1024 [MB] (10 MBps) [2024-10-27T11:39:23.497Z] Copying: 547/1024 [MB] (10 MBps) [2024-10-27T11:39:24.882Z] Copying: 560/1024 [MB] (13 MBps) [2024-10-27T11:39:25.825Z] Copying: 613/1024 [MB] (53 MBps) [2024-10-27T11:39:26.768Z] Copying: 667/1024 [MB] (53 MBps) [2024-10-27T11:39:27.714Z] Copying: 701/1024 [MB] (33 MBps) [2024-10-27T11:39:28.656Z] Copying: 722/1024 [MB] (20 MBps) [2024-10-27T11:39:29.599Z] Copying: 739/1024 [MB] (17 MBps) [2024-10-27T11:39:30.541Z] Copying: 752/1024 [MB] (12 MBps) [2024-10-27T11:39:31.486Z] Copying: 769/1024 [MB] (17 MBps) [2024-10-27T11:39:32.871Z] Copying: 779/1024 [MB] (10 MBps) [2024-10-27T11:39:33.812Z] Copying: 790/1024 [MB] (10 MBps) [2024-10-27T11:39:34.756Z] Copying: 800/1024 [MB] (10 MBps) [2024-10-27T11:39:35.701Z] Copying: 818/1024 [MB] (17 MBps) [2024-10-27T11:39:36.646Z] Copying: 838/1024 [MB] (20 MBps) [2024-10-27T11:39:37.592Z] Copying: 859/1024 [MB] (20 MBps) [2024-10-27T11:39:38.534Z] Copying: 875/1024 [MB] (16 MBps) [2024-10-27T11:39:39.920Z] Copying: 892/1024 [MB] (16 MBps) [2024-10-27T11:39:40.492Z] Copying: 924/1024 [MB] (31 MBps) [2024-10-27T11:39:41.879Z] Copying: 956/1024 [MB] (32 MBps) [2024-10-27T11:39:42.823Z] Copying: 977/1024 [MB] (21 MBps) [2024-10-27T11:39:43.769Z] Copying: 995/1024 [MB] (18 MBps) [2024-10-27T11:39:44.775Z] Copying: 1012/1024 [MB] (16 MBps) [2024-10-27T11:39:45.383Z] Copying: 1023/1024 [MB] (10 MBps) [2024-10-27T11:39:45.383Z] Copying: 1024/1024 [MB] 
(average 20 MBps)[2024-10-27 11:39:45.333104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.102 [2024-10-27 11:39:45.333312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.102 [2024-10-27 11:39:45.333376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.102 [2024-10-27 11:39:45.333400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.102 [2024-10-27 11:39:45.334592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.102 [2024-10-27 11:39:45.338395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.102 [2024-10-27 11:39:45.338494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.102 [2024-10-27 11:39:45.338513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:25:00.102 [2024-10-27 11:39:45.338521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.102 [2024-10-27 11:39:45.351519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.102 [2024-10-27 11:39:45.351651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:00.102 [2024-10-27 11:39:45.351670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.176 ms 00:25:00.102 [2024-10-27 11:39:45.351678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.102 [2024-10-27 11:39:45.376155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.102 [2024-10-27 11:39:45.376191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:00.102 [2024-10-27 11:39:45.376202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.458 ms 00:25:00.102 [2024-10-27 11:39:45.376210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.382343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.382465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.362 [2024-10-27 11:39:45.382481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:25:00.362 [2024-10-27 11:39:45.382488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.406548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.406580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.362 [2024-10-27 11:39:45.406591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.021 ms 00:25:00.362 [2024-10-27 11:39:45.406598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.420283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.420327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.362 [2024-10-27 11:39:45.420338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.654 ms 00:25:00.362 [2024-10-27 11:39:45.420346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.580539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.580586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.362 
[2024-10-27 11:39:45.580601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 160.158 ms 00:25:00.362 [2024-10-27 11:39:45.580609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.603832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.603868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:00.362 [2024-10-27 11:39:45.603879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.208 ms 00:25:00.362 [2024-10-27 11:39:45.603887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.362 [2024-10-27 11:39:45.627383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.362 [2024-10-27 11:39:45.627420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:00.363 [2024-10-27 11:39:45.627430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.460 ms 00:25:00.363 [2024-10-27 11:39:45.627438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.624 [2024-10-27 11:39:45.651342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.624 [2024-10-27 11:39:45.651382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:00.624 [2024-10-27 11:39:45.651393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.865 ms 00:25:00.624 [2024-10-27 11:39:45.651400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.624 [2024-10-27 11:39:45.675467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.624 [2024-10-27 11:39:45.675508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:00.624 [2024-10-27 11:39:45.675519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.982 ms 00:25:00.624 [2024-10-27 11:39:45.675527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.624 [2024-10-27 11:39:45.675567] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:00.624 [2024-10-27 11:39:45.675582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111360 / 261120 wr_cnt: 1 state: open 00:25:00.624 [2024-10-27 11:39:45.675593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 
state: free 00:25:00.624 [2024-10-27 11:39:45.675664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 
0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:00.624 [2024-10-27 11:39:45.675880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.675999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676237] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:00.625 [2024-10-27 11:39:45.676395] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:00.625 [2024-10-27 11:39:45.676403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: adc2a391-0658-4d35-80d0-3e1990da8c64 00:25:00.625 [2024-10-27 11:39:45.676412] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111360 00:25:00.625 [2024-10-27 11:39:45.676423] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112320 00:25:00.625 [2024-10-27 11:39:45.676438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111360 00:25:00.625 [2024-10-27 11:39:45.676447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:25:00.626 [2024-10-27 11:39:45.676455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:00.626 [2024-10-27 11:39:45.676463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:00.626 [2024-10-27 11:39:45.676471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:00.626 [2024-10-27 11:39:45.676477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:00.626 [2024-10-27 11:39:45.676484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:00.626 [2024-10-27 11:39:45.676491] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.626 [2024-10-27 11:39:45.676500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:00.626 [2024-10-27 11:39:45.676509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:25:00.626 [2024-10-27 11:39:45.676516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.689698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.626 [2024-10-27 11:39:45.689858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:00.626 [2024-10-27 11:39:45.689875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.163 ms 00:25:00.626 [2024-10-27 11:39:45.689885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.690276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.626 [2024-10-27 11:39:45.690286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:00.626 [2024-10-27 11:39:45.690316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:25:00.626 [2024-10-27 11:39:45.690332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.726721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.726768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.626 [2024-10-27 11:39:45.726779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.726787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.726845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.726854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.626 [2024-10-27 11:39:45.726863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.726875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.726936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.726946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.626 [2024-10-27 11:39:45.726955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.726963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.726977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.726986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.626 [2024-10-27 11:39:45.726993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.727002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.809261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.809495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.626 [2024-10-27 11:39:45.809516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.809525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.876725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.876918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.626 [2024-10-27 11:39:45.876936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.876945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.626 [2024-10-27 11:39:45.877057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.626 [2024-10-27 11:39:45.877125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.626 [2024-10-27 11:39:45.877259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:00.626 [2024-10-27 11:39:45.877354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.626 [2024-10-27 11:39:45.877425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.626 [2024-10-27 11:39:45.877494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.626 [2024-10-27 11:39:45.877502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.626 [2024-10-27 11:39:45.877510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.626 [2024-10-27 11:39:45.877651] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.018 ms, result 0 00:25:02.535 00:25:02.535 00:25:02.535 11:39:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:04.447 11:39:49 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:04.447 [2024-10-27 11:39:49.615611] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:25:04.447 [2024-10-27 11:39:49.615913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78820 ] 00:25:04.707 [2024-10-27 11:39:49.779894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.707 [2024-10-27 11:39:49.895244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.968 [2024-10-27 11:39:50.187337] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:04.968 [2024-10-27 11:39:50.187405] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:05.229 [2024-10-27 11:39:50.348239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.348317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:05.229 [2024-10-27 11:39:50.348335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:05.229 [2024-10-27 11:39:50.348345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.348400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.348411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:05.229 [2024-10-27 11:39:50.348423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:05.229 [2024-10-27 11:39:50.348432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.348454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:05.229 [2024-10-27 11:39:50.349363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:05.229 [2024-10-27 11:39:50.349539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.349554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:05.229 [2024-10-27 11:39:50.349564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:25:05.229 [2024-10-27 11:39:50.349573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.351393] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:05.229 [2024-10-27 11:39:50.365414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.365460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:05.229 [2024-10-27 11:39:50.365473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.023 ms 00:25:05.229 [2024-10-27 11:39:50.365482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.365556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.365568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:25:05.229 [2024-10-27 11:39:50.365578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:05.229 [2024-10-27 11:39:50.365586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.373457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.373495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:05.229 [2024-10-27 11:39:50.373505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.796 ms 00:25:05.229 [2024-10-27 11:39:50.373514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.229 [2024-10-27 11:39:50.373597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.229 [2024-10-27 11:39:50.373607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:05.230 [2024-10-27 11:39:50.373616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:05.230 [2024-10-27 11:39:50.373624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.373667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.230 [2024-10-27 11:39:50.373678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:05.230 [2024-10-27 11:39:50.373686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:05.230 [2024-10-27 11:39:50.373694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.373718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:05.230 [2024-10-27 11:39:50.377757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.230 [2024-10-27 11:39:50.377795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:05.230 [2024-10-27 11:39:50.377806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.045 ms 00:25:05.230 [2024-10-27 11:39:50.377818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.377852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.230 [2024-10-27 11:39:50.377861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:05.230 [2024-10-27 11:39:50.377869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:05.230 [2024-10-27 11:39:50.377877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.377927] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:05.230 [2024-10-27 11:39:50.377950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:05.230 [2024-10-27 11:39:50.377987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:05.230 [2024-10-27 11:39:50.378006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:05.230 [2024-10-27 11:39:50.378111] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:05.230 [2024-10-27 11:39:50.378123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:25:05.230 [2024-10-27 11:39:50.378134] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:05.230 [2024-10-27 11:39:50.378145] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378155] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:05.230 [2024-10-27 11:39:50.378172] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:05.230 [2024-10-27 11:39:50.378180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:05.230 [2024-10-27 11:39:50.378188] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:05.230 [2024-10-27 11:39:50.378199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.230 [2024-10-27 11:39:50.378207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:05.230 [2024-10-27 11:39:50.378215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:25:05.230 [2024-10-27 11:39:50.378222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.378328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.230 [2024-10-27 11:39:50.378338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:05.230 [2024-10-27 11:39:50.378346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:05.230 [2024-10-27 11:39:50.378354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.230 [2024-10-27 11:39:50.378458] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:05.230 [2024-10-27 11:39:50.378474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:05.230 [2024-10-27 11:39:50.378482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:05.230 [2024-10-27 11:39:50.378505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:05.230 [2024-10-27 11:39:50.378527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:05.230 [2024-10-27 11:39:50.378541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:05.230 [2024-10-27 11:39:50.378548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:05.230 [2024-10-27 11:39:50.378555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:05.230 [2024-10-27 11:39:50.378562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:05.230 [2024-10-27 11:39:50.378569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:05.230 [2024-10-27 11:39:50.378582] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:05.230 [2024-10-27 11:39:50.378597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:05.230 [2024-10-27 11:39:50.378618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:05.230 [2024-10-27 11:39:50.378638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:05.230 [2024-10-27 11:39:50.378657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:05.230 [2024-10-27 11:39:50.378677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:05.230 [2024-10-27 11:39:50.378697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:05.230 [2024-10-27 11:39:50.378709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:05.230 [2024-10-27 11:39:50.378715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:05.230 [2024-10-27 11:39:50.378721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:05.230 [2024-10-27 11:39:50.378727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:05.230 [2024-10-27 11:39:50.378734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:05.230 [2024-10-27 11:39:50.378740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:05.230 [2024-10-27 11:39:50.378753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:05.230 [2024-10-27 11:39:50.378759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 [2024-10-27 11:39:50.378765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:05.230 [2024-10-27 11:39:50.378774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:05.230 [2024-10-27 11:39:50.378782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:05.230 [2024-10-27 11:39:50.378790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:05.230 
[2024-10-27 11:39:50.378797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:05.230 [2024-10-27 11:39:50.378807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:05.230 [2024-10-27 11:39:50.378814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:05.231 [2024-10-27 11:39:50.378821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:05.231 [2024-10-27 11:39:50.378828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:05.231 [2024-10-27 11:39:50.378835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:05.231 [2024-10-27 11:39:50.378843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:05.231 [2024-10-27 11:39:50.378853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:05.231 [2024-10-27 11:39:50.378869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:05.231 [2024-10-27 11:39:50.378877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:05.231 [2024-10-27 11:39:50.378884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:05.231 [2024-10-27 11:39:50.378891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:05.231 [2024-10-27 11:39:50.378899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:05.231 [2024-10-27 11:39:50.378906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:05.231 [2024-10-27 11:39:50.378913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:05.231 [2024-10-27 11:39:50.378920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:05.231 [2024-10-27 11:39:50.378928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:05.231 [2024-10-27 11:39:50.378964] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:25:05.231 [2024-10-27 11:39:50.378972] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:05.231 [2024-10-27 11:39:50.378989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:05.231 [2024-10-27 11:39:50.378997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:05.231 [2024-10-27 11:39:50.379004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:05.231 [2024-10-27 11:39:50.379011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.379019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:05.231 [2024-10-27 11:39:50.379027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:25:05.231 [2024-10-27 11:39:50.379035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.410283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.410344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:05.231 [2024-10-27 11:39:50.410355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.205 ms 00:25:05.231 [2024-10-27 11:39:50.410363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.410452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.410466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:05.231 [2024-10-27 11:39:50.410475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:05.231 [2024-10-27 11:39:50.410483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.452689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.452742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:05.231 [2024-10-27 11:39:50.452756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.150 ms 00:25:05.231 [2024-10-27 11:39:50.452766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.452813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.452823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:05.231 [2024-10-27 11:39:50.452832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:05.231 [2024-10-27 11:39:50.452844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.453462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.453485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:05.231 [2024-10-27 11:39:50.453495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:25:05.231 [2024-10-27 11:39:50.453503] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.453658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.453669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:05.231 [2024-10-27 11:39:50.453678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:25:05.231 [2024-10-27 11:39:50.453687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.469130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.469176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:05.231 [2024-10-27 11:39:50.469187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.417 ms 00:25:05.231 [2024-10-27 11:39:50.469198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.231 [2024-10-27 11:39:50.483394] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:05.231 [2024-10-27 11:39:50.483580] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:05.231 [2024-10-27 11:39:50.483600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.231 [2024-10-27 11:39:50.483609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:05.231 [2024-10-27 11:39:50.483619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.259 ms 00:25:05.231 [2024-10-27 11:39:50.483626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.509182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.509249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:05.493 [2024-10-27 11:39:50.509261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.445 ms 00:25:05.493 [2024-10-27 11:39:50.509269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.521937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.521989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:05.493 [2024-10-27 11:39:50.522001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.598 ms 00:25:05.493 [2024-10-27 11:39:50.522009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.534335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.534504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:05.493 [2024-10-27 11:39:50.534524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.282 ms 00:25:05.493 [2024-10-27 11:39:50.534531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.535163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.535189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:05.493 [2024-10-27 11:39:50.535199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:25:05.493 [2024-10-27 11:39:50.535207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:05.493 [2024-10-27 11:39:50.598025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.598285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:05.493 [2024-10-27 11:39:50.598333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.796 ms 00:25:05.493 [2024-10-27 11:39:50.598349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.609279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:05.493 [2024-10-27 11:39:50.612142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.612183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:05.493 [2024-10-27 11:39:50.612195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.673 ms 00:25:05.493 [2024-10-27 11:39:50.612204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.612287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.612324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:05.493 [2024-10-27 11:39:50.612334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:05.493 [2024-10-27 11:39:50.612342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.614127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.614172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:05.493 [2024-10-27 11:39:50.614182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:25:05.493 [2024-10-27 11:39:50.614191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.614226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.614235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:05.493 [2024-10-27 11:39:50.614244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:05.493 [2024-10-27 11:39:50.614252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.614291] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:05.493 [2024-10-27 11:39:50.614326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.614335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:05.493 [2024-10-27 11:39:50.614343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:05.493 [2024-10-27 11:39:50.614352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.640235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 [2024-10-27 11:39:50.640280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:05.493 [2024-10-27 11:39:50.640309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.865 ms 00:25:05.493 [2024-10-27 11:39:50.640319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.640410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.493 
[2024-10-27 11:39:50.640421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:05.493 [2024-10-27 11:39:50.640431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:05.493 [2024-10-27 11:39:50.640439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.493 [2024-10-27 11:39:50.641884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.050 ms, result 0 00:25:06.879  [2024-10-27T11:39:53.100Z] Copying: 1020/1048576 [kB] (1020 kBps) [2024-10-27T11:39:54.038Z] Copying: 5268/1048576 [kB] (4248 kBps) [2024-10-27T11:39:54.978Z] Copying: 34/1024 [MB] (29 MBps) [2024-10-27T11:39:55.917Z] Copying: 78/1024 [MB] (43 MBps) [2024-10-27T11:39:56.857Z] Copying: 105/1024 [MB] (27 MBps) [2024-10-27T11:39:58.239Z] Copying: 137/1024 [MB] (31 MBps) [2024-10-27T11:39:59.180Z] Copying: 171/1024 [MB] (33 MBps) [2024-10-27T11:40:00.119Z] Copying: 200/1024 [MB] (29 MBps) [2024-10-27T11:40:01.062Z] Copying: 235/1024 [MB] (35 MBps) [2024-10-27T11:40:02.003Z] Copying: 266/1024 [MB] (30 MBps) [2024-10-27T11:40:02.944Z] Copying: 289/1024 [MB] (23 MBps) [2024-10-27T11:40:03.888Z] Copying: 322/1024 [MB] (33 MBps) [2024-10-27T11:40:04.831Z] Copying: 353/1024 [MB] (30 MBps) [2024-10-27T11:40:06.215Z] Copying: 384/1024 [MB] (31 MBps) [2024-10-27T11:40:07.157Z] Copying: 411/1024 [MB] (26 MBps) [2024-10-27T11:40:08.097Z] Copying: 441/1024 [MB] (29 MBps) [2024-10-27T11:40:09.042Z] Copying: 473/1024 [MB] (31 MBps) [2024-10-27T11:40:09.984Z] Copying: 503/1024 [MB] (30 MBps) [2024-10-27T11:40:10.926Z] Copying: 534/1024 [MB] (30 MBps) [2024-10-27T11:40:11.870Z] Copying: 555/1024 [MB] (20 MBps) [2024-10-27T11:40:13.258Z] Copying: 584/1024 [MB] (29 MBps) [2024-10-27T11:40:13.832Z] Copying: 616/1024 [MB] (31 MBps) [2024-10-27T11:40:15.220Z] Copying: 643/1024 [MB] (27 MBps) [2024-10-27T11:40:16.165Z] Copying: 672/1024 [MB] (28 MBps) [2024-10-27T11:40:17.174Z] Copying: 702/1024 [MB] (30 MBps) [2024-10-27T11:40:18.117Z] Copying: 732/1024 [MB] (29 MBps) [2024-10-27T11:40:19.058Z] Copying: 763/1024 [MB] (30 MBps) [2024-10-27T11:40:19.999Z] Copying: 788/1024 [MB] (25 MBps) [2024-10-27T11:40:20.940Z] Copying: 819/1024 [MB] (31 MBps) [2024-10-27T11:40:21.881Z] Copying: 849/1024 [MB] (29 MBps) [2024-10-27T11:40:23.265Z] Copying: 878/1024 [MB] (29 MBps) [2024-10-27T11:40:23.838Z] Copying: 904/1024 [MB] (26 MBps) [2024-10-27T11:40:25.223Z] Copying: 934/1024 [MB] (29 MBps) [2024-10-27T11:40:26.166Z] Copying: 959/1024 [MB] (25 MBps) [2024-10-27T11:40:27.110Z] Copying: 976/1024 [MB] (16 MBps) [2024-10-27T11:40:28.051Z] Copying: 992/1024 [MB] (16 MBps) [2024-10-27T11:40:28.621Z] Copying: 1008/1024 [MB] (16 MBps) [2024-10-27T11:40:28.885Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-27 11:40:28.649251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.649366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:43.604 [2024-10-27 11:40:28.649405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:43.604 [2024-10-27 11:40:28.649419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.649453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:43.604 [2024-10-27 11:40:28.652734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 
11:40:28.652972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:43.604 [2024-10-27 11:40:28.652996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:25:43.604 [2024-10-27 11:40:28.653005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.653357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.653371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:43.604 [2024-10-27 11:40:28.653381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:25:43.604 [2024-10-27 11:40:28.653394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.666308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.666373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:43.604 [2024-10-27 11:40:28.666386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:25:43.604 [2024-10-27 11:40:28.666394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.672788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.672826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:43.604 [2024-10-27 11:40:28.672838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.358 ms 00:25:43.604 [2024-10-27 11:40:28.672854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.699257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.699312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:43.604 [2024-10-27 11:40:28.699325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.345 ms 00:25:43.604 [2024-10-27 11:40:28.699334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.714841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.715028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:43.604 [2024-10-27 11:40:28.715050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.463 ms 00:25:43.604 [2024-10-27 11:40:28.715060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.719856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.719904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:43.604 [2024-10-27 11:40:28.719915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:25:43.604 [2024-10-27 11:40:28.719924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.745276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.745324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:43.604 [2024-10-27 11:40:28.745336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.335 ms 00:25:43.604 [2024-10-27 11:40:28.745343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.770357] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.770411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:43.604 [2024-10-27 11:40:28.770434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.972 ms 00:25:43.604 [2024-10-27 11:40:28.770441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.795158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.795198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:43.604 [2024-10-27 11:40:28.795210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.674 ms 00:25:43.604 [2024-10-27 11:40:28.795218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.820134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.604 [2024-10-27 11:40:28.820175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:43.604 [2024-10-27 11:40:28.820187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.846 ms 00:25:43.604 [2024-10-27 11:40:28.820194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.604 [2024-10-27 11:40:28.820236] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:43.604 [2024-10-27 11:40:28.820251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:43.604 [2024-10-27 11:40:28.820263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:43.604 [2024-10-27 11:40:28.820271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:25:43.604 [2024-10-27 11:40:28.820397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:43.604 [2024-10-27 11:40:28.820419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.820994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.821001] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:43.605 [2024-10-27 11:40:28.821008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:43.606 [2024-10-27 11:40:28.821105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:43.606 [2024-10-27 11:40:28.821114] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: adc2a391-0658-4d35-80d0-3e1990da8c64 00:25:43.606 [2024-10-27 11:40:28.821123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:43.606 [2024-10-27 11:40:28.821131] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 153280 00:25:43.606 [2024-10-27 11:40:28.821138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 151296 00:25:43.606 [2024-10-27 11:40:28.821147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0131 00:25:43.606 [2024-10-27 11:40:28.821158] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:43.606 [2024-10-27 11:40:28.821166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:43.606 [2024-10-27 11:40:28.821174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:43.606 [2024-10-27 11:40:28.821189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:43.606 [2024-10-27 11:40:28.821196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:43.606 [2024-10-27 11:40:28.821203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.606 [2024-10-27 11:40:28.821211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:43.606 [2024-10-27 11:40:28.821220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:25:43.606 [2024-10-27 11:40:28.821228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.834794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.606 [2024-10-27 11:40:28.834831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:43.606 [2024-10-27 11:40:28.834850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.546 ms 00:25:43.606 [2024-10-27 11:40:28.834858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.835248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:43.606 [2024-10-27 11:40:28.835258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:43.606 [2024-10-27 11:40:28.835267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:25:43.606 [2024-10-27 11:40:28.835275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.871527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.606 [2024-10-27 11:40:28.871583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:43.606 [2024-10-27 11:40:28.871594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.606 [2024-10-27 11:40:28.871603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.871659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.606 [2024-10-27 11:40:28.871668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:43.606 [2024-10-27 11:40:28.871677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.606 [2024-10-27 11:40:28.871685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.871763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.606 [2024-10-27 11:40:28.871778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:43.606 [2024-10-27 11:40:28.871788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.606 [2024-10-27 11:40:28.871796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.606 [2024-10-27 11:40:28.871812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.606 [2024-10-27 11:40:28.871820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:43.606 [2024-10-27 11:40:28.871828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.606 [2024-10-27 11:40:28.871837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:28.956610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:28.956669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:43.867 [2024-10-27 11:40:28.956681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:28.956690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.025688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:29.025744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:43.867 [2024-10-27 11:40:29.025757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:29.025767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.025823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:29.025833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:25:43.867 [2024-10-27 11:40:29.025842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:29.025856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.025911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:29.025921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:43.867 [2024-10-27 11:40:29.025930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:29.025938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.026039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:29.026050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:43.867 [2024-10-27 11:40:29.026058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:29.026070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.026102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.867 [2024-10-27 11:40:29.026112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:43.867 [2024-10-27 11:40:29.026121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.867 [2024-10-27 11:40:29.026128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.867 [2024-10-27 11:40:29.026168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.868 [2024-10-27 11:40:29.026178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:43.868 [2024-10-27 11:40:29.026186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.868 [2024-10-27 11:40:29.026193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.868 [2024-10-27 11:40:29.026242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:43.868 [2024-10-27 11:40:29.026252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:43.868 [2024-10-27 11:40:29.026260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:43.868 [2024-10-27 11:40:29.026268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:43.868 [2024-10-27 11:40:29.026424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.149 ms, result 0 00:25:44.812 00:25:44.812 00:25:44.812 11:40:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:46.728 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:46.728 11:40:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:46.728 [2024-10-27 11:40:31.928905] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:25:46.728 [2024-10-27 11:40:31.929022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79257 ] 00:25:46.990 [2024-10-27 11:40:32.091195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.990 [2024-10-27 11:40:32.208746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.250 [2024-10-27 11:40:32.498575] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.250 [2024-10-27 11:40:32.498662] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.512 [2024-10-27 11:40:32.660454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.660518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.512 [2024-10-27 11:40:32.660536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:47.512 [2024-10-27 11:40:32.660545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.660603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.660614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.512 [2024-10-27 11:40:32.660625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:47.512 [2024-10-27 11:40:32.660633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.660653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.512 [2024-10-27 11:40:32.661425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.512 [2024-10-27 11:40:32.661451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.661459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.512 [2024-10-27 11:40:32.661469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:25:47.512 [2024-10-27 11:40:32.661477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.663271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:47.512 [2024-10-27 11:40:32.677658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.677709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:47.512 [2024-10-27 11:40:32.677723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.390 ms 00:25:47.512 [2024-10-27 11:40:32.677731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.677812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.677826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:47.512 [2024-10-27 11:40:32.677837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:47.512 [2024-10-27 11:40:32.677845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.687459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:47.512 [2024-10-27 11:40:32.687506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.512 [2024-10-27 11:40:32.687517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.531 ms 00:25:47.512 [2024-10-27 11:40:32.687525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.687615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.687625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.512 [2024-10-27 11:40:32.687636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:47.512 [2024-10-27 11:40:32.687644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.687689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.687699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.512 [2024-10-27 11:40:32.687708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:47.512 [2024-10-27 11:40:32.687716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.687740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.512 [2024-10-27 11:40:32.691866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.691904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.512 [2024-10-27 11:40:32.691914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.131 ms 00:25:47.512 [2024-10-27 11:40:32.691926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.691961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.691970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.512 [2024-10-27 11:40:32.691978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:47.512 [2024-10-27 11:40:32.691985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.692037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:47.512 [2024-10-27 11:40:32.692060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:47.512 [2024-10-27 11:40:32.692098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:47.512 [2024-10-27 11:40:32.692118] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:47.512 [2024-10-27 11:40:32.692225] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.512 [2024-10-27 11:40:32.692237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.512 [2024-10-27 11:40:32.692248] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.512 [2024-10-27 11:40:32.692258] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692269] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692278] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.512 [2024-10-27 11:40:32.692286] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.512 [2024-10-27 11:40:32.692313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.512 [2024-10-27 11:40:32.692321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.512 [2024-10-27 11:40:32.692333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.692342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.512 [2024-10-27 11:40:32.692353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:25:47.512 [2024-10-27 11:40:32.692360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.692444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.512 [2024-10-27 11:40:32.692452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.512 [2024-10-27 11:40:32.692461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:47.512 [2024-10-27 11:40:32.692467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.512 [2024-10-27 11:40:32.692570] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.512 [2024-10-27 11:40:32.692592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.512 [2024-10-27 11:40:32.692601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.512 [2024-10-27 11:40:32.692625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.512 [2024-10-27 11:40:32.692646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.512 [2024-10-27 11:40:32.692660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.512 [2024-10-27 11:40:32.692667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.512 [2024-10-27 11:40:32.692673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.512 [2024-10-27 11:40:32.692680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.512 [2024-10-27 11:40:32.692687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.512 [2024-10-27 11:40:32.692700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.512 [2024-10-27 11:40:32.692714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692721] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.512 [2024-10-27 11:40:32.692736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.512 [2024-10-27 11:40:32.692743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.512 [2024-10-27 11:40:32.692750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.512 [2024-10-27 11:40:32.692756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.513 [2024-10-27 11:40:32.692770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.513 [2024-10-27 11:40:32.692776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.513 [2024-10-27 11:40:32.692789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.513 [2024-10-27 11:40:32.692796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.513 [2024-10-27 11:40:32.692811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.513 [2024-10-27 11:40:32.692817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.513 [2024-10-27 11:40:32.692830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.513 [2024-10-27 11:40:32.692836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.513 [2024-10-27 11:40:32.692843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.513 [2024-10-27 11:40:32.692850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.513 [2024-10-27 11:40:32.692858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:47.513 [2024-10-27 11:40:32.692867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.513 [2024-10-27 11:40:32.692881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.513 [2024-10-27 11:40:32.692891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692898] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.513 [2024-10-27 11:40:32.692906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.513 [2024-10-27 11:40:32.692913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.513 [2024-10-27 11:40:32.692921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.513 [2024-10-27 11:40:32.692930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.513 [2024-10-27 11:40:32.692937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.513 [2024-10-27 11:40:32.692944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.513 
[2024-10-27 11:40:32.692951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.513 [2024-10-27 11:40:32.692959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.513 [2024-10-27 11:40:32.692966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.513 [2024-10-27 11:40:32.692975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.513 [2024-10-27 11:40:32.692985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.692994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.513 [2024-10-27 11:40:32.693001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.513 [2024-10-27 11:40:32.693009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.513 [2024-10-27 11:40:32.693016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.513 [2024-10-27 11:40:32.693023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.513 [2024-10-27 11:40:32.693030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:47.513 [2024-10-27 11:40:32.693037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.513 [2024-10-27 11:40:32.693045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.513 [2024-10-27 11:40:32.693052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.513 [2024-10-27 11:40:32.693060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.513 [2024-10-27 11:40:32.693113] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.513 [2024-10-27 11:40:32.693121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.513 [2024-10-27 11:40:32.693143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.513 [2024-10-27 11:40:32.693151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.513 [2024-10-27 11:40:32.693159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.513 [2024-10-27 11:40:32.693168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.693177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.513 [2024-10-27 11:40:32.693186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:25:47.513 [2024-10-27 11:40:32.693194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.725725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.725782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.513 [2024-10-27 11:40:32.725795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.484 ms 00:25:47.513 [2024-10-27 11:40:32.725803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.725898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.725913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:47.513 [2024-10-27 11:40:32.725922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:47.513 [2024-10-27 11:40:32.725930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.779352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.779405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.513 [2024-10-27 11:40:32.779418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.362 ms 00:25:47.513 [2024-10-27 11:40:32.779428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.779479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.779490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.513 [2024-10-27 11:40:32.779499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:47.513 [2024-10-27 11:40:32.779511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.780125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.780158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.513 [2024-10-27 11:40:32.780169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:25:47.513 [2024-10-27 11:40:32.780177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.513 [2024-10-27 11:40:32.780350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.513 [2024-10-27 11:40:32.780361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.513 [2024-10-27 11:40:32.780370] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:25:47.513 [2024-10-27 11:40:32.780378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.796727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.796771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.774 [2024-10-27 11:40:32.796782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.322 ms 00:25:47.774 [2024-10-27 11:40:32.796793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.810928] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:47.774 [2024-10-27 11:40:32.810979] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:47.774 [2024-10-27 11:40:32.810993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.811002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:47.774 [2024-10-27 11:40:32.811012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.083 ms 00:25:47.774 [2024-10-27 11:40:32.811020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.837125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.837188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:47.774 [2024-10-27 11:40:32.837201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.049 ms 00:25:47.774 [2024-10-27 11:40:32.837210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.850398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.850446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:47.774 [2024-10-27 11:40:32.850459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.127 ms 00:25:47.774 [2024-10-27 11:40:32.850466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.863321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.863365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:47.774 [2024-10-27 11:40:32.863377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.806 ms 00:25:47.774 [2024-10-27 11:40:32.863385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.864034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.864062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.774 [2024-10-27 11:40:32.864073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:25:47.774 [2024-10-27 11:40:32.864081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.928988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.929251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:47.774 [2024-10-27 11:40:32.929277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.883 ms 00:25:47.774 [2024-10-27 11:40:32.929323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.940765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:47.774 [2024-10-27 11:40:32.943737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.943905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:47.774 [2024-10-27 11:40:32.943925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.337 ms 00:25:47.774 [2024-10-27 11:40:32.943934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.944026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.944039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:47.774 [2024-10-27 11:40:32.944048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:47.774 [2024-10-27 11:40:32.944056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.944905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.944955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.774 [2024-10-27 11:40:32.944968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:25:47.774 [2024-10-27 11:40:32.944978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.945009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.945018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.774 [2024-10-27 11:40:32.945028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.774 [2024-10-27 11:40:32.945037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.945096] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:47.774 [2024-10-27 11:40:32.945111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.945121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:47.774 [2024-10-27 11:40:32.945131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:47.774 [2024-10-27 11:40:32.945140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.970534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.774 [2024-10-27 11:40:32.970721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.774 [2024-10-27 11:40:32.970745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.372 ms 00:25:47.774 [2024-10-27 11:40:32.970754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.774 [2024-10-27 11:40:32.970845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.775 [2024-10-27 11:40:32.970856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.775 [2024-10-27 11:40:32.970865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:47.775 [2024-10-27 11:40:32.970873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:47.775 [2024-10-27 11:40:32.972222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 311.287 ms, result 0 00:25:49.156  [2024-10-27T11:40:35.378Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-27T11:40:36.320Z] Copying: 34/1024 [MB] (20 MBps) [2024-10-27T11:40:37.261Z] Copying: 65/1024 [MB] (30 MBps) [2024-10-27T11:40:38.200Z] Copying: 86/1024 [MB] (21 MBps) [2024-10-27T11:40:39.585Z] Copying: 104/1024 [MB] (17 MBps) [2024-10-27T11:40:40.153Z] Copying: 115/1024 [MB] (11 MBps) [2024-10-27T11:40:41.536Z] Copying: 127/1024 [MB] (11 MBps) [2024-10-27T11:40:42.476Z] Copying: 139/1024 [MB] (11 MBps) [2024-10-27T11:40:43.416Z] Copying: 157/1024 [MB] (18 MBps) [2024-10-27T11:40:44.410Z] Copying: 173/1024 [MB] (15 MBps) [2024-10-27T11:40:45.351Z] Copying: 187/1024 [MB] (14 MBps) [2024-10-27T11:40:46.292Z] Copying: 206/1024 [MB] (19 MBps) [2024-10-27T11:40:47.233Z] Copying: 226/1024 [MB] (19 MBps) [2024-10-27T11:40:48.175Z] Copying: 245/1024 [MB] (18 MBps) [2024-10-27T11:40:49.561Z] Copying: 263/1024 [MB] (17 MBps) [2024-10-27T11:40:50.512Z] Copying: 273/1024 [MB] (10 MBps) [2024-10-27T11:40:51.452Z] Copying: 289/1024 [MB] (15 MBps) [2024-10-27T11:40:52.394Z] Copying: 303/1024 [MB] (13 MBps) [2024-10-27T11:40:53.337Z] Copying: 319/1024 [MB] (15 MBps) [2024-10-27T11:40:54.280Z] Copying: 330/1024 [MB] (10 MBps) [2024-10-27T11:40:55.224Z] Copying: 342/1024 [MB] (12 MBps) [2024-10-27T11:40:56.167Z] Copying: 353/1024 [MB] (10 MBps) [2024-10-27T11:40:57.555Z] Copying: 371/1024 [MB] (18 MBps) [2024-10-27T11:40:58.496Z] Copying: 385/1024 [MB] (13 MBps) [2024-10-27T11:40:59.440Z] Copying: 400/1024 [MB] (15 MBps) [2024-10-27T11:41:00.380Z] Copying: 418/1024 [MB] (18 MBps) [2024-10-27T11:41:01.317Z] Copying: 433/1024 [MB] (14 MBps) [2024-10-27T11:41:02.260Z] Copying: 444/1024 [MB] (10 MBps) [2024-10-27T11:41:03.206Z] Copying: 455/1024 [MB] (11 MBps) [2024-10-27T11:41:04.594Z] Copying: 469/1024 [MB] (14 MBps) [2024-10-27T11:41:05.165Z] Copying: 492/1024 [MB] (23 MBps) [2024-10-27T11:41:06.551Z] Copying: 504/1024 [MB] (11 MBps) [2024-10-27T11:41:07.491Z] Copying: 515/1024 [MB] (11 MBps) [2024-10-27T11:41:08.432Z] Copying: 529/1024 [MB] (13 MBps) [2024-10-27T11:41:09.374Z] Copying: 546/1024 [MB] (16 MBps) [2024-10-27T11:41:10.315Z] Copying: 557/1024 [MB] (11 MBps) [2024-10-27T11:41:11.256Z] Copying: 569/1024 [MB] (11 MBps) [2024-10-27T11:41:12.197Z] Copying: 589/1024 [MB] (19 MBps) [2024-10-27T11:41:13.591Z] Copying: 604/1024 [MB] (15 MBps) [2024-10-27T11:41:14.246Z] Copying: 620/1024 [MB] (16 MBps) [2024-10-27T11:41:15.189Z] Copying: 635/1024 [MB] (14 MBps) [2024-10-27T11:41:16.572Z] Copying: 648/1024 [MB] (13 MBps) [2024-10-27T11:41:17.516Z] Copying: 664/1024 [MB] (16 MBps) [2024-10-27T11:41:18.459Z] Copying: 677/1024 [MB] (13 MBps) [2024-10-27T11:41:19.401Z] Copying: 693/1024 [MB] (15 MBps) [2024-10-27T11:41:20.341Z] Copying: 711/1024 [MB] (18 MBps) [2024-10-27T11:41:21.283Z] Copying: 728/1024 [MB] (17 MBps) [2024-10-27T11:41:22.228Z] Copying: 743/1024 [MB] (15 MBps) [2024-10-27T11:41:23.179Z] Copying: 754/1024 [MB] (10 MBps) [2024-10-27T11:41:24.566Z] Copying: 764/1024 [MB] (10 MBps) [2024-10-27T11:41:25.512Z] Copying: 775/1024 [MB] (10 MBps) [2024-10-27T11:41:26.456Z] Copying: 785/1024 [MB] (10 MBps) [2024-10-27T11:41:27.401Z] Copying: 795/1024 [MB] (10 MBps) [2024-10-27T11:41:28.346Z] Copying: 806/1024 [MB] (10 MBps) [2024-10-27T11:41:29.291Z] Copying: 817/1024 [MB] (10 MBps) [2024-10-27T11:41:30.234Z] Copying: 827/1024 [MB] (10 MBps) 
[2024-10-27T11:41:31.179Z] Copying: 847/1024 [MB] (20 MBps) [2024-10-27T11:41:32.568Z] Copying: 857/1024 [MB] (10 MBps) [2024-10-27T11:41:33.513Z] Copying: 868/1024 [MB] (10 MBps) [2024-10-27T11:41:34.457Z] Copying: 878/1024 [MB] (10 MBps) [2024-10-27T11:41:35.403Z] Copying: 889/1024 [MB] (10 MBps) [2024-10-27T11:41:36.347Z] Copying: 910/1024 [MB] (20 MBps) [2024-10-27T11:41:37.290Z] Copying: 922/1024 [MB] (12 MBps) [2024-10-27T11:41:38.236Z] Copying: 933/1024 [MB] (11 MBps) [2024-10-27T11:41:39.181Z] Copying: 951/1024 [MB] (18 MBps) [2024-10-27T11:41:40.570Z] Copying: 962/1024 [MB] (10 MBps) [2024-10-27T11:41:41.514Z] Copying: 973/1024 [MB] (11 MBps) [2024-10-27T11:41:42.481Z] Copying: 984/1024 [MB] (11 MBps) [2024-10-27T11:41:43.435Z] Copying: 1000/1024 [MB] (16 MBps) [2024-10-27T11:41:43.435Z] Copying: 1022/1024 [MB] (21 MBps) [2024-10-27T11:41:43.435Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-27 11:41:43.338429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.338519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:58.154 [2024-10-27 11:41:43.338537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:58.154 [2024-10-27 11:41:43.338548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.338574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:58.154 [2024-10-27 11:41:43.342817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.342871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:58.154 [2024-10-27 11:41:43.342886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms 00:26:58.154 [2024-10-27 11:41:43.342903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.343176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.343189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:58.154 [2024-10-27 11:41:43.343202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:26:58.154 [2024-10-27 11:41:43.343211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.346990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.347017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:58.154 [2024-10-27 11:41:43.347028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:26:58.154 [2024-10-27 11:41:43.347037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.353288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.353339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:58.154 [2024-10-27 11:41:43.353351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:26:58.154 [2024-10-27 11:41:43.353360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.381231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.381480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:58.154 [2024-10-27 
11:41:43.381504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.795 ms 00:26:58.154 [2024-10-27 11:41:43.381514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.398479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.398530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:58.154 [2024-10-27 11:41:43.398544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.828 ms 00:26:58.154 [2024-10-27 11:41:43.398553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.403463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.403515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:58.154 [2024-10-27 11:41:43.403536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.849 ms 00:26:58.154 [2024-10-27 11:41:43.403545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.154 [2024-10-27 11:41:43.430284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.154 [2024-10-27 11:41:43.430347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:58.154 [2024-10-27 11:41:43.430359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.721 ms 00:26:58.154 [2024-10-27 11:41:43.430367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.416 [2024-10-27 11:41:43.456635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.416 [2024-10-27 11:41:43.456865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:58.416 [2024-10-27 11:41:43.456887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.218 ms 00:26:58.416 [2024-10-27 11:41:43.456896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.416 [2024-10-27 11:41:43.482143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.416 [2024-10-27 11:41:43.482192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:58.416 [2024-10-27 11:41:43.482204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.205 ms 00:26:58.416 [2024-10-27 11:41:43.482212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.416 [2024-10-27 11:41:43.507783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.416 [2024-10-27 11:41:43.507977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:58.416 [2024-10-27 11:41:43.508000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.491 ms 00:26:58.416 [2024-10-27 11:41:43.508008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.416 [2024-10-27 11:41:43.508048] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:58.416 [2024-10-27 11:41:43.508066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:58.416 [2024-10-27 11:41:43.508085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:58.416 [2024-10-27 11:41:43.508095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508103] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:58.416 [2024-10-27 11:41:43.508428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 
11:41:43.508538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:26:58.417 [2024-10-27 11:41:43.508732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:58.417 [2024-10-27 11:41:43.508918] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:58.417 [2024-10-27 11:41:43.508927] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: adc2a391-0658-4d35-80d0-3e1990da8c64 00:26:58.417 [2024-10-27 11:41:43.508939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:58.417 [2024-10-27 
11:41:43.508947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:58.417 [2024-10-27 11:41:43.508954] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:58.417 [2024-10-27 11:41:43.508962] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:58.417 [2024-10-27 11:41:43.508971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:58.417 [2024-10-27 11:41:43.508980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:58.417 [2024-10-27 11:41:43.508995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:58.417 [2024-10-27 11:41:43.509002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:58.417 [2024-10-27 11:41:43.509009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:58.417 [2024-10-27 11:41:43.509017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.417 [2024-10-27 11:41:43.509025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:58.417 [2024-10-27 11:41:43.509036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:26:58.417 [2024-10-27 11:41:43.509044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.417 [2024-10-27 11:41:43.522868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.417 [2024-10-27 11:41:43.523046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:58.417 [2024-10-27 11:41:43.523065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms 00:26:58.418 [2024-10-27 11:41:43.523074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.523499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:58.418 [2024-10-27 11:41:43.523520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:58.418 [2024-10-27 11:41:43.523530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:26:58.418 [2024-10-27 11:41:43.523547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.560463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.418 [2024-10-27 11:41:43.560655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:58.418 [2024-10-27 11:41:43.560677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.418 [2024-10-27 11:41:43.560689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.560755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.418 [2024-10-27 11:41:43.560766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:58.418 [2024-10-27 11:41:43.560776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.418 [2024-10-27 11:41:43.560793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.560887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.418 [2024-10-27 11:41:43.560899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:58.418 [2024-10-27 11:41:43.560908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.418 [2024-10-27 11:41:43.560917] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.560935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.418 [2024-10-27 11:41:43.560944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:58.418 [2024-10-27 11:41:43.560952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.418 [2024-10-27 11:41:43.560961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.418 [2024-10-27 11:41:43.649654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.418 [2024-10-27 11:41:43.649731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:58.418 [2024-10-27 11:41:43.649747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.418 [2024-10-27 11:41:43.649757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.720795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.720856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:58.679 [2024-10-27 11:41:43.720869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.720886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.720957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.720968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:58.679 [2024-10-27 11:41:43.720977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.720985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.721061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:58.679 [2024-10-27 11:41:43.721070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.721078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.721211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:58.679 [2024-10-27 11:41:43.721220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.721229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.721275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:58.679 [2024-10-27 11:41:43.721283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.721328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.721385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:58.679 [2024-10-27 11:41:43.721396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:58.679 [2024-10-27 11:41:43.721404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:58.679 [2024-10-27 11:41:43.721461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:58.679 [2024-10-27 11:41:43.721473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:58.679 [2024-10-27 11:41:43.721483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:58.679 [2024-10-27 11:41:43.721619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.158 ms, result 0 00:26:59.250 00:26:59.250 00:26:59.250 11:41:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.800 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.800 Process with pid 77454 is not found 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77454 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77454 ']' 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 77454 00:27:01.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77454) - No such process 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 77454 is not found' 00:27:01.800 11:41:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:01.800 Remove shared memory files 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:01.800 11:41:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:02.062 ************************************ 00:27:02.062 END TEST ftl_dirty_shutdown 00:27:02.062 ************************************ 00:27:02.062 00:27:02.062 real 4m3.961s 00:27:02.062 user 4m29.498s 00:27:02.062 sys 0m26.489s 00:27:02.062 11:41:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:02.062 11:41:47 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.062 11:41:47 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:02.062 11:41:47 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:02.062 11:41:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:02.062 11:41:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:02.062 ************************************ 00:27:02.062 START TEST ftl_upgrade_shutdown 00:27:02.062 ************************************ 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:02.062 * Looking for test storage... 00:27:02.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.062 --rc genhtml_branch_coverage=1 00:27:02.062 --rc genhtml_function_coverage=1 00:27:02.062 --rc genhtml_legend=1 00:27:02.062 --rc geninfo_all_blocks=1 00:27:02.062 --rc geninfo_unexecuted_blocks=1 00:27:02.062 00:27:02.062 ' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.062 --rc genhtml_branch_coverage=1 00:27:02.062 --rc genhtml_function_coverage=1 00:27:02.062 --rc genhtml_legend=1 00:27:02.062 --rc geninfo_all_blocks=1 00:27:02.062 --rc geninfo_unexecuted_blocks=1 00:27:02.062 00:27:02.062 ' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.062 --rc genhtml_branch_coverage=1 00:27:02.062 --rc genhtml_function_coverage=1 00:27:02.062 --rc genhtml_legend=1 00:27:02.062 --rc geninfo_all_blocks=1 00:27:02.062 --rc geninfo_unexecuted_blocks=1 00:27:02.062 00:27:02.062 ' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.062 --rc genhtml_branch_coverage=1 00:27:02.062 --rc genhtml_function_coverage=1 00:27:02.062 --rc genhtml_legend=1 00:27:02.062 --rc geninfo_all_blocks=1 00:27:02.062 --rc geninfo_unexecuted_blocks=1 00:27:02.062 00:27:02.062 ' 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:02.062 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:02.063 11:41:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80084 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80084 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80084 ']' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.063 11:41:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:02.324 [2024-10-27 11:41:47.409766] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
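For reference, everything the ftl_upgrade_shutdown run does below is driven by the environment exported in upgrade_shutdown.sh, as traced above. Restated as a plain shell sketch (values copied from the trace; this is a summary, not an additional invocation):

    # FTL upgrade/shutdown test configuration, as exported above
    export FTL_BDEV=ftl                # name given to the FTL bdev under test
    export FTL_BASE=0000:00:11.0       # PCIe address of the base NVMe device
    export FTL_BASE_SIZE=20480         # base bdev size in MiB
    export FTL_CACHE=0000:00:10.0      # PCIe address of the NV cache NVMe device
    export FTL_CACHE_SIZE=5120         # NV cache size in MiB
    export FTL_L2P_DRAM_LIMIT=2        # L2P DRAM limit in MiB
    # tcp_target_setup then launches the target on core 0:
    #   /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
    # and waits for its RPC socket at /var/tmp/spdk.sock (pid 80084 above)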
00:27:02.324 [2024-10-27 11:41:47.409914] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80084 ] 00:27:02.324 [2024-10-27 11:41:47.573955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.584 [2024-10-27 11:41:47.695408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:03.155 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:03.417 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:03.678 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:03.678 { 00:27:03.678 "name": "basen1", 00:27:03.678 "aliases": [ 00:27:03.678 "0ce14a64-0959-4175-8860-22fdf9ccf02a" 00:27:03.678 ], 00:27:03.678 "product_name": "NVMe disk", 00:27:03.678 "block_size": 4096, 00:27:03.678 "num_blocks": 1310720, 00:27:03.678 "uuid": "0ce14a64-0959-4175-8860-22fdf9ccf02a", 00:27:03.678 "numa_id": -1, 00:27:03.678 "assigned_rate_limits": { 00:27:03.678 "rw_ios_per_sec": 0, 00:27:03.678 "rw_mbytes_per_sec": 0, 00:27:03.678 "r_mbytes_per_sec": 0, 00:27:03.678 "w_mbytes_per_sec": 0 00:27:03.678 }, 00:27:03.678 "claimed": true, 00:27:03.678 "claim_type": "read_many_write_one", 00:27:03.678 "zoned": false, 00:27:03.678 "supported_io_types": { 00:27:03.678 "read": true, 00:27:03.678 "write": true, 00:27:03.678 "unmap": true, 00:27:03.678 "flush": true, 00:27:03.678 "reset": true, 00:27:03.678 "nvme_admin": true, 00:27:03.678 "nvme_io": true, 00:27:03.678 "nvme_io_md": false, 00:27:03.678 "write_zeroes": true, 00:27:03.678 "zcopy": false, 00:27:03.678 "get_zone_info": false, 00:27:03.678 "zone_management": false, 00:27:03.678 "zone_append": false, 00:27:03.678 "compare": true, 00:27:03.678 "compare_and_write": false, 00:27:03.678 "abort": true, 00:27:03.678 "seek_hole": false, 00:27:03.678 "seek_data": false, 00:27:03.678 "copy": true, 00:27:03.678 "nvme_iov_md": false 00:27:03.678 }, 00:27:03.678 "driver_specific": { 00:27:03.678 "nvme": [ 00:27:03.678 { 00:27:03.678 "pci_address": "0000:00:11.0", 00:27:03.678 "trid": { 00:27:03.678 "trtype": "PCIe", 00:27:03.678 "traddr": "0000:00:11.0" 00:27:03.678 }, 00:27:03.678 "ctrlr_data": { 00:27:03.678 "cntlid": 0, 00:27:03.678 "vendor_id": "0x1b36", 00:27:03.678 "model_number": "QEMU NVMe Ctrl", 00:27:03.678 "serial_number": "12341", 00:27:03.678 "firmware_revision": "8.0.0", 00:27:03.678 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:03.678 "oacs": { 00:27:03.678 "security": 0, 00:27:03.678 "format": 1, 00:27:03.678 "firmware": 0, 00:27:03.678 "ns_manage": 1 00:27:03.678 }, 00:27:03.678 "multi_ctrlr": false, 00:27:03.678 "ana_reporting": false 00:27:03.678 }, 00:27:03.678 "vs": { 00:27:03.678 "nvme_version": "1.4" 00:27:03.678 }, 00:27:03.678 "ns_data": { 00:27:03.678 "id": 1, 00:27:03.678 "can_share": false 00:27:03.678 } 00:27:03.678 } 00:27:03.678 ], 00:27:03.678 "mp_policy": "active_passive" 00:27:03.678 } 00:27:03.678 } 00:27:03.678 ]' 00:27:03.678 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:03.678 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:03.678 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.939 11:41:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:03.940 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=b62ad7ec-197d-4346-9b2b-193c9c93cd15 00:27:03.940 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:03.940 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b62ad7ec-197d-4346-9b2b-193c9c93cd15 00:27:04.201 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:04.462 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=cc0f9fc9-fe87-42c7-8ad4-44076ec0c3d5 00:27:04.462 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u cc0f9fc9-fe87-42c7-8ad4-44076ec0c3d5 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=81b64408-7a51-4ada-abf5-9eb920b2728d 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 81b64408-7a51-4ada-abf5-9eb920b2728d ]] 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 81b64408-7a51-4ada-abf5-9eb920b2728d 5120 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=81b64408-7a51-4ada-abf5-9eb920b2728d 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 81b64408-7a51-4ada-abf5-9eb920b2728d 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=81b64408-7a51-4ada-abf5-9eb920b2728d 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81b64408-7a51-4ada-abf5-9eb920b2728d 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:04.723 { 00:27:04.723 "name": "81b64408-7a51-4ada-abf5-9eb920b2728d", 00:27:04.723 "aliases": [ 00:27:04.723 "lvs/basen1p0" 00:27:04.723 ], 00:27:04.723 "product_name": "Logical Volume", 00:27:04.723 "block_size": 4096, 00:27:04.723 "num_blocks": 5242880, 00:27:04.723 "uuid": "81b64408-7a51-4ada-abf5-9eb920b2728d", 00:27:04.723 "assigned_rate_limits": { 00:27:04.723 "rw_ios_per_sec": 0, 00:27:04.723 "rw_mbytes_per_sec": 0, 00:27:04.723 "r_mbytes_per_sec": 0, 00:27:04.723 "w_mbytes_per_sec": 0 00:27:04.723 }, 00:27:04.723 "claimed": false, 00:27:04.723 "zoned": false, 00:27:04.723 "supported_io_types": { 00:27:04.723 "read": true, 00:27:04.723 "write": true, 00:27:04.723 "unmap": true, 00:27:04.723 "flush": false, 00:27:04.723 "reset": true, 00:27:04.723 "nvme_admin": false, 00:27:04.723 "nvme_io": false, 00:27:04.723 "nvme_io_md": false, 00:27:04.723 "write_zeroes": 
true, 00:27:04.723 "zcopy": false, 00:27:04.723 "get_zone_info": false, 00:27:04.723 "zone_management": false, 00:27:04.723 "zone_append": false, 00:27:04.723 "compare": false, 00:27:04.723 "compare_and_write": false, 00:27:04.723 "abort": false, 00:27:04.723 "seek_hole": true, 00:27:04.723 "seek_data": true, 00:27:04.723 "copy": false, 00:27:04.723 "nvme_iov_md": false 00:27:04.723 }, 00:27:04.723 "driver_specific": { 00:27:04.723 "lvol": { 00:27:04.723 "lvol_store_uuid": "cc0f9fc9-fe87-42c7-8ad4-44076ec0c3d5", 00:27:04.723 "base_bdev": "basen1", 00:27:04.723 "thin_provision": true, 00:27:04.723 "num_allocated_clusters": 0, 00:27:04.723 "snapshot": false, 00:27:04.723 "clone": false, 00:27:04.723 "esnap_clone": false 00:27:04.723 } 00:27:04.723 } 00:27:04.723 } 00:27:04.723 ]' 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:04.723 11:41:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:04.984 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:04.984 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:04.984 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:05.246 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:05.246 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:05.246 11:41:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 81b64408-7a51-4ada-abf5-9eb920b2728d -c cachen1p0 --l2p_dram_limit 2 00:27:05.508 [2024-10-27 11:41:50.596173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.596219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:05.508 [2024-10-27 11:41:50.596235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:05.508 [2024-10-27 11:41:50.596244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.596316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.596327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:05.508 [2024-10-27 11:41:50.596337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:05.508 [2024-10-27 11:41:50.596345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.596366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:05.508 [2024-10-27 
11:41:50.597093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:05.508 [2024-10-27 11:41:50.597117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.597125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:05.508 [2024-10-27 11:41:50.597135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.754 ms 00:27:05.508 [2024-10-27 11:41:50.597151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.597185] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 08fea97b-55b0-4a3b-8fbf-5c6fef8e948f 00:27:05.508 [2024-10-27 11:41:50.598317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.598352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:05.508 [2024-10-27 11:41:50.598362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:05.508 [2024-10-27 11:41:50.598372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.603738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.603771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:05.508 [2024-10-27 11:41:50.603781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.318 ms 00:27:05.508 [2024-10-27 11:41:50.603792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.603872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.603884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:05.508 [2024-10-27 11:41:50.603892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:05.508 [2024-10-27 11:41:50.603904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.603952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.603964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:05.508 [2024-10-27 11:41:50.603972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:05.508 [2024-10-27 11:41:50.603982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.604006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:05.508 [2024-10-27 11:41:50.607630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.607660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:05.508 [2024-10-27 11:41:50.607672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.629 ms 00:27:05.508 [2024-10-27 11:41:50.607681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.607707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.607715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:05.508 [2024-10-27 11:41:50.607724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:05.508 [2024-10-27 11:41:50.607731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.607748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:05.508 [2024-10-27 11:41:50.607879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:05.508 [2024-10-27 11:41:50.607894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:05.508 [2024-10-27 11:41:50.607905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:05.508 [2024-10-27 11:41:50.607916] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:05.508 [2024-10-27 11:41:50.607925] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:05.508 [2024-10-27 11:41:50.607934] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:05.508 [2024-10-27 11:41:50.607941] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:05.508 [2024-10-27 11:41:50.607950] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:05.508 [2024-10-27 11:41:50.607956] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:05.508 [2024-10-27 11:41:50.607968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.607975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:05.508 [2024-10-27 11:41:50.607984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.221 ms 00:27:05.508 [2024-10-27 11:41:50.607991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.608077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.508 [2024-10-27 11:41:50.608085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:05.508 [2024-10-27 11:41:50.608094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:05.508 [2024-10-27 11:41:50.608106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.508 [2024-10-27 11:41:50.608215] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:05.509 [2024-10-27 11:41:50.608226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:05.509 [2024-10-27 11:41:50.608236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:05.509 [2024-10-27 11:41:50.608258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:05.509 [2024-10-27 11:41:50.608273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:05.509 [2024-10-27 11:41:50.608281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:05.509 [2024-10-27 11:41:50.608287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:05.509 [2024-10-27 11:41:50.608313] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:05.509 [2024-10-27 11:41:50.608321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:05.509 [2024-10-27 11:41:50.608335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:05.509 [2024-10-27 11:41:50.608342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:05.509 [2024-10-27 11:41:50.608358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:05.509 [2024-10-27 11:41:50.608368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:05.509 [2024-10-27 11:41:50.608385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:05.509 [2024-10-27 11:41:50.608406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:05.509 [2024-10-27 11:41:50.608429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:05.509 [2024-10-27 11:41:50.608449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:05.509 [2024-10-27 11:41:50.608473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:05.509 [2024-10-27 11:41:50.608494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:05.509 [2024-10-27 11:41:50.608518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:05.509 [2024-10-27 11:41:50.608539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:05.509 [2024-10-27 11:41:50.608546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608552] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:05.509 [2024-10-27 11:41:50.608561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:05.509 [2024-10-27 11:41:50.608568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:05.509 [2024-10-27 11:41:50.608586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:05.509 [2024-10-27 11:41:50.608596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:05.509 [2024-10-27 11:41:50.608602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:05.509 [2024-10-27 11:41:50.608610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:05.509 [2024-10-27 11:41:50.608619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:05.509 [2024-10-27 11:41:50.608627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:05.509 [2024-10-27 11:41:50.608637] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:05.509 [2024-10-27 11:41:50.608647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:05.509 [2024-10-27 11:41:50.608664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:05.509 [2024-10-27 11:41:50.608686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:05.509 [2024-10-27 11:41:50.608695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:05.509 [2024-10-27 11:41:50.608702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:05.509 [2024-10-27 11:41:50.608711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:05.509 [2024-10-27 11:41:50.608765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:05.509 [2024-10-27 11:41:50.608775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:05.509 [2024-10-27 11:41:50.608794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:05.509 [2024-10-27 11:41:50.608801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:05.509 [2024-10-27 11:41:50.608810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:05.509 [2024-10-27 11:41:50.608817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:05.509 [2024-10-27 11:41:50.608825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:05.509 [2024-10-27 11:41:50.608833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:27:05.509 [2024-10-27 11:41:50.608841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:05.509 [2024-10-27 11:41:50.608878] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
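The layout dump above also makes the L2P sizing easy to sanity-check: 3,774,873 L2P entries at 4 bytes each come to roughly 14.4 MiB, which matches the 14.50 MiB reported for the l2p region once rounded up to block granularity. A throwaway check of that arithmetic (a sketch, not part of the test):

    echo $((3774873 * 4))                                       # 15099492 bytes
    awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'  # ~14.40 MiB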
00:27:05.509 [2024-10-27 11:41:50.608890] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:08.812 [2024-10-27 11:41:53.814817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.812 [2024-10-27 11:41:53.814875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:08.812 [2024-10-27 11:41:53.814890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3205.925 ms 00:27:08.812 [2024-10-27 11:41:53.814901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.812 [2024-10-27 11:41:53.841354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.812 [2024-10-27 11:41:53.841406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:08.812 [2024-10-27 11:41:53.841418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.225 ms 00:27:08.812 [2024-10-27 11:41:53.841427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.812 [2024-10-27 11:41:53.841500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.812 [2024-10-27 11:41:53.841512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:08.812 [2024-10-27 11:41:53.841521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:08.812 [2024-10-27 11:41:53.841532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.812 [2024-10-27 11:41:53.875230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.812 [2024-10-27 11:41:53.875278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:08.812 [2024-10-27 11:41:53.875290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.662 ms 00:27:08.812 [2024-10-27 11:41:53.875317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.875352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.875364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:08.813 [2024-10-27 11:41:53.875373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:08.813 [2024-10-27 11:41:53.875385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.875822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.875859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:08.813 [2024-10-27 11:41:53.875870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:27:08.813 [2024-10-27 11:41:53.875879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.875927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.875937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:08.813 [2024-10-27 11:41:53.875945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:08.813 [2024-10-27 11:41:53.875957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.891529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.891576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:08.813 [2024-10-27 11:41:53.891588] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.551 ms 00:27:08.813 [2024-10-27 11:41:53.891601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.904059] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:08.813 [2024-10-27 11:41:53.905225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.905261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:08.813 [2024-10-27 11:41:53.905274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.538 ms 00:27:08.813 [2024-10-27 11:41:53.905283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.940951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.941012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:08.813 [2024-10-27 11:41:53.941032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.529 ms 00:27:08.813 [2024-10-27 11:41:53.941041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.941165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.941177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:08.813 [2024-10-27 11:41:53.941192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:27:08.813 [2024-10-27 11:41:53.941203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.966867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.966924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:08.813 [2024-10-27 11:41:53.966940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.605 ms 00:27:08.813 [2024-10-27 11:41:53.966949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.992533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.992584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:08.813 [2024-10-27 11:41:53.992599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.524 ms 00:27:08.813 [2024-10-27 11:41:53.992607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:53.993252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:53.993391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:08.813 [2024-10-27 11:41:53.993405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:27:08.813 [2024-10-27 11:41:53.993413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.813 [2024-10-27 11:41:54.079821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.813 [2024-10-27 11:41:54.079877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:08.813 [2024-10-27 11:41:54.079898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.352 ms 00:27:08.813 [2024-10-27 11:41:54.079907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.108276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:09.074 [2024-10-27 11:41:54.108343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:09.074 [2024-10-27 11:41:54.108371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.262 ms 00:27:09.074 [2024-10-27 11:41:54.108379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.135266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.074 [2024-10-27 11:41:54.135323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:09.074 [2024-10-27 11:41:54.135338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.824 ms 00:27:09.074 [2024-10-27 11:41:54.135345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.162232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.074 [2024-10-27 11:41:54.162284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:09.074 [2024-10-27 11:41:54.162307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.827 ms 00:27:09.074 [2024-10-27 11:41:54.162316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.162377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.074 [2024-10-27 11:41:54.162387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:09.074 [2024-10-27 11:41:54.162403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:09.074 [2024-10-27 11:41:54.162411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.162506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.074 [2024-10-27 11:41:54.162517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:09.074 [2024-10-27 11:41:54.162528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:09.074 [2024-10-27 11:41:54.162536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.074 [2024-10-27 11:41:54.163833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3567.172 ms, result 0 00:27:09.074 { 00:27:09.074 "name": "ftl", 00:27:09.074 "uuid": "08fea97b-55b0-4a3b-8fbf-5c6fef8e948f" 00:27:09.074 } 00:27:09.074 11:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:09.335 [2024-10-27 11:41:54.386818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.335 11:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:09.596 11:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:09.596 [2024-10-27 11:41:54.807273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:09.596 11:41:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:09.858 [2024-10-27 11:41:55.032534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:09.858 11:41:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:10.119 Fill FTL, iteration 1 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80207 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80207 /var/tmp/spdk.tgt.sock 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80207 ']' 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:10.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:10.119 11:41:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.380 [2024-10-27 11:41:55.457133] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
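At this point the FTL bdev has been exported over NVMe/TCP (subsystem nqn.2018-09.io.spdk:cnode0 listening on 127.0.0.1:4420, created above) and a second SPDK target, acting as the initiator, is being started on core 1 with its own RPC socket. Condensed from the trace that follows, the initiator-side attach that produces the ftln1 bdev used by the fill and verify steps looks like this ($RPC is only a shorthand for readability; the command itself is taken from the log):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    # attach the exported FTL namespace from the initiator side; the resulting bdev is ftln1
    $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0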
00:27:10.380 [2024-10-27 11:41:55.457287] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80207 ] 00:27:10.380 [2024-10-27 11:41:55.616556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.642 [2024-10-27 11:41:55.709477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.214 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.214 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:11.214 11:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:11.476 ftln1 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80207 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80207 ']' 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80207 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.476 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80207 00:27:11.737 killing process with pid 80207 00:27:11.737 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:11.737 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:11.737 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80207' 00:27:11.737 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80207 00:27:11.737 11:41:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80207 00:27:13.123 11:41:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:13.123 11:41:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.123 [2024-10-27 11:41:58.216996] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
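Each fill pass writes bs * count = 1,048,576 * 1,024 = 1,073,741,824 bytes, exactly the 1 GiB size value set above, from /dev/urandom into ftln1 at queue depth 2; with iterations=2 and the seek offset advancing by 1024 MiB per pass, the two passes cover disjoint 1 GiB ranges. A quick check of that arithmetic (not part of the test):

    echo $((1048576 * 1024))   # 1073741824, i.e. the size= value above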
00:27:13.123 [2024-10-27 11:41:58.217120] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80251 ] 00:27:13.123 [2024-10-27 11:41:58.371974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.384 [2024-10-27 11:41:58.446333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.767  [2024-10-27T11:42:00.990Z] Copying: 259/1024 [MB] (259 MBps) [2024-10-27T11:42:01.933Z] Copying: 526/1024 [MB] (267 MBps) [2024-10-27T11:42:02.877Z] Copying: 792/1024 [MB] (266 MBps) [2024-10-27T11:42:03.449Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:27:18.168 00:27:18.168 Calculate MD5 checksum, iteration 1 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.168 11:42:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.168 [2024-10-27 11:42:03.264249] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:27:18.168 [2024-10-27 11:42:03.264387] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80308 ] 00:27:18.168 [2024-10-27 11:42:03.419806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.429 [2024-10-27 11:42:03.496283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.815  [2024-10-27T11:42:05.358Z] Copying: 722/1024 [MB] (722 MBps) [2024-10-27T11:42:06.304Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:27:21.023 00:27:21.023 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:21.023 11:42:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:22.935 Fill FTL, iteration 2 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fa69ca03a09dbdd860c28cd53ca01ae3 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:22.935 11:42:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:22.935 [2024-10-27 11:42:08.202062] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:27:22.935 [2024-10-27 11:42:08.202185] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80359 ] 00:27:23.195 [2024-10-27 11:42:08.361303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.195 [2024-10-27 11:42:08.466318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.575  [2024-10-27T11:42:11.244Z] Copying: 193/1024 [MB] (193 MBps) [2024-10-27T11:42:11.850Z] Copying: 373/1024 [MB] (180 MBps) [2024-10-27T11:42:13.224Z] Copying: 577/1024 [MB] (204 MBps) [2024-10-27T11:42:13.790Z] Copying: 812/1024 [MB] (235 MBps) [2024-10-27T11:42:14.726Z] Copying: 1024/1024 [MB] (average 208 MBps) 00:27:29.445 00:27:29.445 Calculate MD5 checksum, iteration 2 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:29.445 11:42:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.445 [2024-10-27 11:42:14.440104] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:27:29.445 [2024-10-27 11:42:14.440226] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80429 ] 00:27:29.445 [2024-10-27 11:42:14.596023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.445 [2024-10-27 11:42:14.695388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.346  [2024-10-27T11:42:16.885Z] Copying: 648/1024 [MB] (648 MBps) [2024-10-27T11:42:17.821Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:27:32.540 00:27:32.540 11:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:32.540 11:42:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:35.079 11:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:35.079 11:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=66d30c913d3428045dcc000bb5627238 00:27:35.079 11:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:35.079 11:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:35.079 11:42:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.079 [2024-10-27 11:42:19.997139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.079 [2024-10-27 11:42:19.997194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.079 [2024-10-27 11:42:19.997206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:35.079 [2024-10-27 11:42:19.997212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.079 [2024-10-27 11:42:19.997233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.079 [2024-10-27 11:42:19.997240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.079 [2024-10-27 11:42:19.997247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.079 [2024-10-27 11:42:19.997253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.079 [2024-10-27 11:42:19.997271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.079 [2024-10-27 11:42:19.997277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.079 [2024-10-27 11:42:19.997283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.079 [2024-10-27 11:42:19.997289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.079 [2024-10-27 11:42:19.997349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:27:35.079 true 00:27:35.079 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.079 { 00:27:35.079 "name": "ftl", 00:27:35.079 "properties": [ 00:27:35.079 { 00:27:35.079 "name": "superblock_version", 00:27:35.079 "value": 5, 00:27:35.079 "read-only": true 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "name": "base_device", 00:27:35.079 "bands": [ 00:27:35.079 { 00:27:35.079 "id": 0, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 
00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 1, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 2, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 3, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 4, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 5, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 6, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 7, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 8, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 9, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 10, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 11, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 12, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 13, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 14, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 15, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 16, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 17, 00:27:35.079 "state": "FREE", 00:27:35.079 "validity": 0.0 00:27:35.079 } 00:27:35.079 ], 00:27:35.079 "read-only": true 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "name": "cache_device", 00:27:35.079 "type": "bdev", 00:27:35.079 "chunks": [ 00:27:35.079 { 00:27:35.079 "id": 0, 00:27:35.079 "state": "INACTIVE", 00:27:35.079 "utilization": 0.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 1, 00:27:35.079 "state": "CLOSED", 00:27:35.079 "utilization": 1.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 2, 00:27:35.079 "state": "CLOSED", 00:27:35.079 "utilization": 1.0 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 3, 00:27:35.079 "state": "OPEN", 00:27:35.079 "utilization": 0.001953125 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "id": 4, 00:27:35.079 "state": "OPEN", 00:27:35.079 "utilization": 0.0 00:27:35.079 } 00:27:35.079 ], 00:27:35.079 "read-only": true 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "name": "verbose_mode", 00:27:35.079 "value": true, 00:27:35.079 "unit": "", 00:27:35.079 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.079 }, 00:27:35.079 { 00:27:35.079 "name": "prep_upgrade_on_shutdown", 00:27:35.079 "value": false, 00:27:35.079 "unit": "", 00:27:35.079 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.079 } 00:27:35.079 ] 00:27:35.079 } 00:27:35.079 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:35.341 [2024-10-27 11:42:20.412809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:35.341 [2024-10-27 11:42:20.412981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.341 [2024-10-27 11:42:20.413055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.341 [2024-10-27 11:42:20.413076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.341 [2024-10-27 11:42:20.413111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.341 [2024-10-27 11:42:20.413128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.341 [2024-10-27 11:42:20.413143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:35.341 [2024-10-27 11:42:20.413157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.341 [2024-10-27 11:42:20.413189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.341 [2024-10-27 11:42:20.413204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.341 [2024-10-27 11:42:20.413219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.341 [2024-10-27 11:42:20.413264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.341 [2024-10-27 11:42:20.413333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.501 ms, result 0 00:27:35.341 true 00:27:35.341 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:35.341 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:35.341 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.601 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:35.601 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:35.601 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.601 [2024-10-27 11:42:20.829146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.601 [2024-10-27 11:42:20.829186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.601 [2024-10-27 11:42:20.829194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:35.601 [2024-10-27 11:42:20.829200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.601 [2024-10-27 11:42:20.829228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.601 [2024-10-27 11:42:20.829236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.601 [2024-10-27 11:42:20.829242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.601 [2024-10-27 11:42:20.829247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.601 [2024-10-27 11:42:20.829262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.601 [2024-10-27 11:42:20.829267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.601 [2024-10-27 11:42:20.829273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.601 [2024-10-27 11:42:20.829278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:35.601 [2024-10-27 11:42:20.829331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.163 ms, result 0 00:27:35.601 true 00:27:35.601 11:42:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.861 { 00:27:35.861 "name": "ftl", 00:27:35.861 "properties": [ 00:27:35.861 { 00:27:35.861 "name": "superblock_version", 00:27:35.861 "value": 5, 00:27:35.861 "read-only": true 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "name": "base_device", 00:27:35.861 "bands": [ 00:27:35.861 { 00:27:35.861 "id": 0, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 1, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 2, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 3, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 4, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 5, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 6, 00:27:35.861 "state": "FREE", 00:27:35.861 "validity": 0.0 00:27:35.861 }, 00:27:35.861 { 00:27:35.861 "id": 7, 00:27:35.861 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 8, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 9, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 10, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 11, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 12, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 13, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 14, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 15, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 16, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 17, 00:27:35.862 "state": "FREE", 00:27:35.862 "validity": 0.0 00:27:35.862 } 00:27:35.862 ], 00:27:35.862 "read-only": true 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "name": "cache_device", 00:27:35.862 "type": "bdev", 00:27:35.862 "chunks": [ 00:27:35.862 { 00:27:35.862 "id": 0, 00:27:35.862 "state": "INACTIVE", 00:27:35.862 "utilization": 0.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 1, 00:27:35.862 "state": "CLOSED", 00:27:35.862 "utilization": 1.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 2, 00:27:35.862 "state": "CLOSED", 00:27:35.862 "utilization": 1.0 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 3, 00:27:35.862 "state": "OPEN", 00:27:35.862 "utilization": 0.001953125 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "id": 4, 00:27:35.862 "state": "OPEN", 00:27:35.862 "utilization": 0.0 00:27:35.862 } 00:27:35.862 ], 00:27:35.862 "read-only": true 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "name": "verbose_mode", 
00:27:35.862 "value": true, 00:27:35.862 "unit": "", 00:27:35.862 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.862 }, 00:27:35.862 { 00:27:35.862 "name": "prep_upgrade_on_shutdown", 00:27:35.862 "value": true, 00:27:35.862 "unit": "", 00:27:35.862 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.862 } 00:27:35.862 ] 00:27:35.862 } 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80084 ]] 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80084 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80084 ']' 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80084 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80084 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.862 killing process with pid 80084 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80084' 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80084 00:27:35.862 11:42:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80084 00:27:36.434 [2024-10-27 11:42:21.601943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:36.434 [2024-10-27 11:42:21.614605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.434 [2024-10-27 11:42:21.614639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:36.434 [2024-10-27 11:42:21.614649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.434 [2024-10-27 11:42:21.614655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.434 [2024-10-27 11:42:21.614672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:36.434 [2024-10-27 11:42:21.616712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.434 [2024-10-27 11:42:21.616738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:36.434 [2024-10-27 11:42:21.616746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.029 ms 00:27:36.434 [2024-10-27 11:42:21.616752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.322349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.322429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:44.571 [2024-10-27 11:42:29.322449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7705.544 ms 00:27:44.571 [2024-10-27 11:42:29.322460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.323882] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.323921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:44.571 [2024-10-27 11:42:29.323932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.404 ms 00:27:44.571 [2024-10-27 11:42:29.323941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.325084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.325115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:44.571 [2024-10-27 11:42:29.325127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.110 ms 00:27:44.571 [2024-10-27 11:42:29.325137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.336495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.336543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:44.571 [2024-10-27 11:42:29.336556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.057 ms 00:27:44.571 [2024-10-27 11:42:29.336564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.343369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.343433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:44.571 [2024-10-27 11:42:29.343445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.761 ms 00:27:44.571 [2024-10-27 11:42:29.343455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.343553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.343565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:44.571 [2024-10-27 11:42:29.343575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:44.571 [2024-10-27 11:42:29.343583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.353836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.353876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:44.571 [2024-10-27 11:42:29.353887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.229 ms 00:27:44.571 [2024-10-27 11:42:29.353894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.363758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.363797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:44.571 [2024-10-27 11:42:29.363808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.821 ms 00:27:44.571 [2024-10-27 11:42:29.363815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.373926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.374111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:44.571 [2024-10-27 11:42:29.374130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.070 ms 00:27:44.571 [2024-10-27 11:42:29.374139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.384283] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.384333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:44.571 [2024-10-27 11:42:29.384344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.001 ms 00:27:44.571 [2024-10-27 11:42:29.384351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.384394] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:44.571 [2024-10-27 11:42:29.384409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:44.571 [2024-10-27 11:42:29.384420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:44.571 [2024-10-27 11:42:29.384440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:44.571 [2024-10-27 11:42:29.384449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:44.571 [2024-10-27 11:42:29.384570] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:44.571 [2024-10-27 11:42:29.384579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 08fea97b-55b0-4a3b-8fbf-5c6fef8e948f 00:27:44.571 [2024-10-27 11:42:29.384587] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:44.571 [2024-10-27 11:42:29.384595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:44.571 [2024-10-27 11:42:29.384602] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:44.571 [2024-10-27 11:42:29.384610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:44.571 [2024-10-27 11:42:29.384618] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:44.571 [2024-10-27 11:42:29.384626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:44.571 [2024-10-27 11:42:29.384634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:44.571 [2024-10-27 11:42:29.384640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:44.571 [2024-10-27 11:42:29.384648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:44.571 [2024-10-27 11:42:29.384657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.384669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:44.571 [2024-10-27 11:42:29.384682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:27:44.571 [2024-10-27 11:42:29.384690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.398816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.398978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:44.571 [2024-10-27 11:42:29.399047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.107 ms 00:27:44.571 [2024-10-27 11:42:29.399071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.399509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.571 [2024-10-27 11:42:29.399547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:44.571 [2024-10-27 11:42:29.399620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:27:44.571 [2024-10-27 11:42:29.399644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.445985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.446154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:44.571 [2024-10-27 11:42:29.446216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.446240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.446311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.446335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:44.571 [2024-10-27 11:42:29.446355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.446375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.446468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.446495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:44.571 [2024-10-27 11:42:29.446517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.446582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.446616] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.446644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:44.571 [2024-10-27 11:42:29.446665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.446685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.531916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.532122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:44.571 [2024-10-27 11:42:29.532182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.532205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.601690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.601745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:44.571 [2024-10-27 11:42:29.601758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.601766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.601864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.601875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:44.571 [2024-10-27 11:42:29.601883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.601892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.601937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.601946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:44.571 [2024-10-27 11:42:29.601961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.601970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.602068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.602079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:44.571 [2024-10-27 11:42:29.602088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.602097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.602129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.602139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:44.571 [2024-10-27 11:42:29.602147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.571 [2024-10-27 11:42:29.602159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.571 [2024-10-27 11:42:29.602200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.571 [2024-10-27 11:42:29.602210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:44.572 [2024-10-27 11:42:29.602218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.572 [2024-10-27 11:42:29.602226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.572 
[2024-10-27 11:42:29.602276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.572 [2024-10-27 11:42:29.602287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:44.572 [2024-10-27 11:42:29.602329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.572 [2024-10-27 11:42:29.602339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.572 [2024-10-27 11:42:29.602473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7987.801 ms, result 0 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80613 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80613 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80613 ']' 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:48.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:48.780 11:42:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.780 [2024-10-27 11:42:33.713893] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:27:48.780 [2024-10-27 11:42:33.714378] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80613 ] 00:27:48.780 [2024-10-27 11:42:33.883163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.780 [2024-10-27 11:42:34.011166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.724 [2024-10-27 11:42:34.799218] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:49.724 [2024-10-27 11:42:34.799497] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:49.724 [2024-10-27 11:42:34.953132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.724 [2024-10-27 11:42:34.953379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:49.724 [2024-10-27 11:42:34.953657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:49.724 [2024-10-27 11:42:34.953702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.724 [2024-10-27 11:42:34.953800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.724 [2024-10-27 11:42:34.953828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.724 [2024-10-27 11:42:34.953849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:49.724 [2024-10-27 11:42:34.953868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.724 [2024-10-27 11:42:34.953911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:49.724 [2024-10-27 11:42:34.954751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:49.724 [2024-10-27 11:42:34.954995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.724 [2024-10-27 11:42:34.955058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.724 [2024-10-27 11:42:34.955085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:27:49.724 [2024-10-27 11:42:34.955104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.724 [2024-10-27 11:42:34.956832] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:49.724 [2024-10-27 11:42:34.971026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.724 [2024-10-27 11:42:34.971072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:49.724 [2024-10-27 11:42:34.971086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.196 ms 00:27:49.724 [2024-10-27 11:42:34.971101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.971175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.971185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:49.725 [2024-10-27 11:42:34.971194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:49.725 [2024-10-27 11:42:34.971203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.979393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 
11:42:34.979441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.725 [2024-10-27 11:42:34.979452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.103 ms 00:27:49.725 [2024-10-27 11:42:34.979460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.979526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.979536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.725 [2024-10-27 11:42:34.979544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:49.725 [2024-10-27 11:42:34.979552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.979598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.979610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:49.725 [2024-10-27 11:42:34.979619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:49.725 [2024-10-27 11:42:34.979630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.979658] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:49.725 [2024-10-27 11:42:34.983897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.983955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.725 [2024-10-27 11:42:34.983967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.245 ms 00:27:49.725 [2024-10-27 11:42:34.983975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.984008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.984017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:49.725 [2024-10-27 11:42:34.984025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:49.725 [2024-10-27 11:42:34.984033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.984088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:49.725 [2024-10-27 11:42:34.984112] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:49.725 [2024-10-27 11:42:34.984153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:49.725 [2024-10-27 11:42:34.984169] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:49.725 [2024-10-27 11:42:34.984276] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:49.725 [2024-10-27 11:42:34.984287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:49.725 [2024-10-27 11:42:34.984321] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:49.725 [2024-10-27 11:42:34.984332] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984341] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:49.725 [2024-10-27 11:42:34.984359] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:49.725 [2024-10-27 11:42:34.984367] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:49.725 [2024-10-27 11:42:34.984375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:49.725 [2024-10-27 11:42:34.984383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.984391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:49.725 [2024-10-27 11:42:34.984399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:27:49.725 [2024-10-27 11:42:34.984406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.984490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.725 [2024-10-27 11:42:34.984500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:49.725 [2024-10-27 11:42:34.984508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:49.725 [2024-10-27 11:42:34.984518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.725 [2024-10-27 11:42:34.984622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:49.725 [2024-10-27 11:42:34.984632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:49.725 [2024-10-27 11:42:34.984643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:49.725 [2024-10-27 11:42:34.984666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:49.725 [2024-10-27 11:42:34.984681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:49.725 [2024-10-27 11:42:34.984690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:49.725 [2024-10-27 11:42:34.984700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:49.725 [2024-10-27 11:42:34.984714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:49.725 [2024-10-27 11:42:34.984721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:49.725 [2024-10-27 11:42:34.984737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:49.725 [2024-10-27 11:42:34.984745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:49.725 [2024-10-27 11:42:34.984759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:49.725 [2024-10-27 11:42:34.984767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984774] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:49.725 [2024-10-27 11:42:34.984780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:49.725 [2024-10-27 11:42:34.984800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:49.725 [2024-10-27 11:42:34.984827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:49.725 [2024-10-27 11:42:34.984846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:49.725 [2024-10-27 11:42:34.984865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:49.725 [2024-10-27 11:42:34.984885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:49.725 [2024-10-27 11:42:34.984904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:49.725 [2024-10-27 11:42:34.984923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:49.725 [2024-10-27 11:42:34.984930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984937] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:49.725 [2024-10-27 11:42:34.984945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:49.725 [2024-10-27 11:42:34.984955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:49.725 [2024-10-27 11:42:34.984962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:49.725 [2024-10-27 11:42:34.984971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:49.725 [2024-10-27 11:42:34.984978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:49.725 [2024-10-27 11:42:34.984986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:49.725 [2024-10-27 11:42:34.984993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:49.725 [2024-10-27 11:42:34.985000] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:49.725 [2024-10-27 11:42:34.985006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:49.725 [2024-10-27 11:42:34.985015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:49.725 [2024-10-27 11:42:34.985028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.725 [2024-10-27 11:42:34.985036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:49.725 [2024-10-27 11:42:34.985044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:49.726 [2024-10-27 11:42:34.985064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:49.726 [2024-10-27 11:42:34.985071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:49.726 [2024-10-27 11:42:34.985078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:49.726 [2024-10-27 11:42:34.985085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:49.726 [2024-10-27 11:42:34.985133] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:49.726 [2024-10-27 11:42:34.985142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:49.726 [2024-10-27 11:42:34.985157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:49.726 [2024-10-27 11:42:34.985164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:49.726 [2024-10-27 11:42:34.985187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:49.726 [2024-10-27 11:42:34.985195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.726 [2024-10-27 11:42:34.985202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:49.726 [2024-10-27 11:42:34.985212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:27:49.726 [2024-10-27 11:42:34.985220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.726 [2024-10-27 11:42:34.985265] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:49.726 [2024-10-27 11:42:34.985277] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:53.936 [2024-10-27 11:42:39.030562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.030640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:53.936 [2024-10-27 11:42:39.030659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4045.279 ms 00:27:53.936 [2024-10-27 11:42:39.030668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.062679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.062741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:53.936 [2024-10-27 11:42:39.062755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.755 ms 00:27:53.936 [2024-10-27 11:42:39.062765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.062863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.062875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:53.936 [2024-10-27 11:42:39.062892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:53.936 [2024-10-27 11:42:39.062901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.098427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.098477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:53.936 [2024-10-27 11:42:39.098490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.470 ms 00:27:53.936 [2024-10-27 11:42:39.098498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.098542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.098552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:53.936 [2024-10-27 11:42:39.098561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:53.936 [2024-10-27 11:42:39.098570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.099151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.099189] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:53.936 [2024-10-27 11:42:39.099200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:27:53.936 [2024-10-27 11:42:39.099209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.099262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.099271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:53.936 [2024-10-27 11:42:39.099280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:53.936 [2024-10-27 11:42:39.099288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.116868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.116914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:53.936 [2024-10-27 11:42:39.116927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.528 ms 00:27:53.936 [2024-10-27 11:42:39.116936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.131532] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:53.936 [2024-10-27 11:42:39.131587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:53.936 [2024-10-27 11:42:39.131601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.131610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:53.936 [2024-10-27 11:42:39.131619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.539 ms 00:27:53.936 [2024-10-27 11:42:39.131627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.147131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.147195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:53.936 [2024-10-27 11:42:39.147207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.445 ms 00:27:53.936 [2024-10-27 11:42:39.147216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.160217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.160264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:53.936 [2024-10-27 11:42:39.160277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.941 ms 00:27:53.936 [2024-10-27 11:42:39.160284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.173038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.173085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:53.936 [2024-10-27 11:42:39.173097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.692 ms 00:27:53.936 [2024-10-27 11:42:39.173104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.936 [2024-10-27 11:42:39.173793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.936 [2024-10-27 11:42:39.173823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:53.936 [2024-10-27 
11:42:39.173838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.552 ms 00:27:53.936 [2024-10-27 11:42:39.173846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.252713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.252790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:54.197 [2024-10-27 11:42:39.252806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.842 ms 00:27:54.197 [2024-10-27 11:42:39.252816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.263996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:54.197 [2024-10-27 11:42:39.265284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.265345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:54.197 [2024-10-27 11:42:39.265358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.399 ms 00:27:54.197 [2024-10-27 11:42:39.265366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.265470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.265482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:54.197 [2024-10-27 11:42:39.265496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:54.197 [2024-10-27 11:42:39.265505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.265585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.265597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:54.197 [2024-10-27 11:42:39.265606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:54.197 [2024-10-27 11:42:39.265615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.265642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.265651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:54.197 [2024-10-27 11:42:39.265660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:54.197 [2024-10-27 11:42:39.265671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.265704] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:54.197 [2024-10-27 11:42:39.265715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.265723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:54.197 [2024-10-27 11:42:39.265731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:54.197 [2024-10-27 11:42:39.265740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.291528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.291580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:54.197 [2024-10-27 11:42:39.291600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.762 ms 00:27:54.197 [2024-10-27 11:42:39.291609] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.291703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.197 [2024-10-27 11:42:39.291716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:54.197 [2024-10-27 11:42:39.291725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:54.197 [2024-10-27 11:42:39.291733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.197 [2024-10-27 11:42:39.293030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4339.392 ms, result 0 00:27:54.197 [2024-10-27 11:42:39.307963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.197 [2024-10-27 11:42:39.323981] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:54.197 [2024-10-27 11:42:39.332362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:54.458 11:42:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:54.458 11:42:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:54.458 11:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:54.458 11:42:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:54.458 11:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:54.718 [2024-10-27 11:42:39.916748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.718 [2024-10-27 11:42:39.916809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:54.718 [2024-10-27 11:42:39.916825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:54.718 [2024-10-27 11:42:39.916833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.718 [2024-10-27 11:42:39.916862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.718 [2024-10-27 11:42:39.916871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:54.718 [2024-10-27 11:42:39.916880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:54.718 [2024-10-27 11:42:39.916888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.718 [2024-10-27 11:42:39.916909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.718 [2024-10-27 11:42:39.916918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:54.718 [2024-10-27 11:42:39.916926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:54.718 [2024-10-27 11:42:39.916935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.718 [2024-10-27 11:42:39.916997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.243 ms, result 0 00:27:54.718 true 00:27:54.718 11:42:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:54.978 { 00:27:54.978 "name": "ftl", 00:27:54.978 "properties": [ 00:27:54.978 { 00:27:54.978 "name": "superblock_version", 00:27:54.978 "value": 5, 00:27:54.978 "read-only": true 00:27:54.978 }, 
00:27:54.978 { 00:27:54.978 "name": "base_device", 00:27:54.978 "bands": [ 00:27:54.978 { 00:27:54.978 "id": 0, 00:27:54.978 "state": "CLOSED", 00:27:54.978 "validity": 1.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 1, 00:27:54.978 "state": "CLOSED", 00:27:54.978 "validity": 1.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 2, 00:27:54.978 "state": "CLOSED", 00:27:54.978 "validity": 0.007843137254901933 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 3, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 4, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 5, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 6, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 7, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 8, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 9, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 10, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 11, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 12, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 13, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 14, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 15, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 16, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "id": 17, 00:27:54.978 "state": "FREE", 00:27:54.978 "validity": 0.0 00:27:54.978 } 00:27:54.978 ], 00:27:54.978 "read-only": true 00:27:54.978 }, 00:27:54.978 { 00:27:54.978 "name": "cache_device", 00:27:54.978 "type": "bdev", 00:27:54.978 "chunks": [ 00:27:54.978 { 00:27:54.978 "id": 0, 00:27:54.979 "state": "INACTIVE", 00:27:54.979 "utilization": 0.0 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "id": 1, 00:27:54.979 "state": "OPEN", 00:27:54.979 "utilization": 0.0 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "id": 2, 00:27:54.979 "state": "OPEN", 00:27:54.979 "utilization": 0.0 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "id": 3, 00:27:54.979 "state": "FREE", 00:27:54.979 "utilization": 0.0 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "id": 4, 00:27:54.979 "state": "FREE", 00:27:54.979 "utilization": 0.0 00:27:54.979 } 00:27:54.979 ], 00:27:54.979 "read-only": true 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "name": "verbose_mode", 00:27:54.979 "value": true, 00:27:54.979 "unit": "", 00:27:54.979 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:54.979 }, 00:27:54.979 { 00:27:54.979 "name": "prep_upgrade_on_shutdown", 00:27:54.979 "value": false, 00:27:54.979 "unit": "", 00:27:54.979 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:54.979 } 00:27:54.979 ] 00:27:54.979 } 00:27:54.979 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:54.979 11:42:40 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:54.979 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:55.240 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:55.240 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:55.240 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:55.240 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:55.240 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:55.502 Validate MD5 checksum, iteration 1 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:55.502 11:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:55.502 [2024-10-27 11:42:40.664088] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
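Note: the trace above covers the checksum half of the upgrade/shutdown test. The jq filters first confirm that no cache chunks are utilized and no bands are OPENED, and test_validate_checksum then reads the ftln1 namespace over NVMe/TCP in 1024 MiB slices and fingerprints each slice with md5sum. A minimal sketch of that loop, assuming hypothetical names for the scratch file and reference digests (the real script's bookkeeping is not visible in this excerpt):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # 1024 blocks of 1 MiB each, queue depth 2, advancing through the namespace
        tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
        [[ $sum == "${reference_sums[i]}" ]] || return 1   # hypothetical reference bookkeeping
    done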
00:27:55.502 [2024-10-27 11:42:40.664200] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80707 ] 00:27:55.762 [2024-10-27 11:42:40.824413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.762 [2024-10-27 11:42:40.929906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.676  [2024-10-27T11:42:43.557Z] Copying: 552/1024 [MB] (552 MBps) [2024-10-27T11:42:44.494Z] Copying: 1024/1024 [MB] (average 558 MBps) 00:27:59.213 00:27:59.213 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:59.213 11:42:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:01.759 Validate MD5 checksum, iteration 2 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa69ca03a09dbdd860c28cd53ca01ae3 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa69ca03a09dbdd860c28cd53ca01ae3 != \f\a\6\9\c\a\0\3\a\0\9\d\b\d\d\8\6\0\c\2\8\c\d\5\3\c\a\0\1\a\e\3 ]] 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:01.759 11:42:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:01.759 [2024-10-27 11:42:46.692748] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
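Note: the tcp_dd helper seen in the trace above is a thin wrapper. tcp_initiator_setup only verifies that the NVMe/TCP initiator configuration (test/ftl/config/ini.json) exists, and spdk_dd is then launched on core 1 with its own RPC socket and that JSON config, which is what attaches ftln1 from the target listening on 127.0.0.1 port 4420. Roughly, with the flags copied from the trace and only the wrapper function itself assumed:

    tcp_dd() {
        tcp_initiator_setup   # expects test/ftl/config/ini.json to already exist
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" "$@"
    }

Here $rootdir stands for /home/vagrant/spdk_repo/spdk in this run.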
00:28:01.759 [2024-10-27 11:42:46.692873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80774 ] 00:28:01.759 [2024-10-27 11:42:46.850876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.759 [2024-10-27 11:42:46.938136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.134  [2024-10-27T11:42:49.048Z] Copying: 781/1024 [MB] (781 MBps) [2024-10-27T11:42:49.617Z] Copying: 1024/1024 [MB] (average 730 MBps) 00:28:04.336 00:28:04.594 11:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:04.594 11:42:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=66d30c913d3428045dcc000bb5627238 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 66d30c913d3428045dcc000bb5627238 != \6\6\d\3\0\c\9\1\3\d\3\4\2\8\0\4\5\d\c\c\0\0\0\b\b\5\6\2\7\2\3\8 ]] 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80613 ]] 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80613 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80833 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80833 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80833 ']' 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
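Note: once both checksums are recorded, tcp_target_shutdown_dirty skips any graceful teardown: the running target (pid 80613 here) is killed with SIGKILL and its pid forgotten, so the FTL superblock is left in the dirty state. tcp_target_setup then relaunches spdk_tgt from the tgt.json saved earlier and waits for its RPC socket, and the startup log that follows takes the dirty-start recovery path (Recover band state, Restore P2L checkpoints, Recover open chunks P2L) instead of scrubbing the NV cache as the first startup did. A condensed sketch of that sequence, assuming the launch/wait plumbing (the real tcp_target_setup also handles the base/cache bdev arguments visible in the trace):

    tcp_target_shutdown_dirty() {
        # kill -9 instead of an RPC shutdown, so FTL never writes a clean shutdown marker
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock is up
    }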
00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.497 11:42:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.497 [2024-10-27 11:42:51.666439] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:28:06.497 [2024-10-27 11:42:51.666575] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80833 ] 00:28:06.756 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 80613 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:06.756 [2024-10-27 11:42:51.830496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.756 [2024-10-27 11:42:51.950819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.700 [2024-10-27 11:42:52.718880] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.700 [2024-10-27 11:42:52.718949] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.700 [2024-10-27 11:42:52.871812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.871870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:07.700 [2024-10-27 11:42:52.871886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:07.700 [2024-10-27 11:42:52.871895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.871955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.871966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:07.700 [2024-10-27 11:42:52.871976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:07.700 [2024-10-27 11:42:52.871984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.872011] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:07.700 [2024-10-27 11:42:52.872820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:07.700 [2024-10-27 11:42:52.872855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.872865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:07.700 [2024-10-27 11:42:52.872874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:28:07.700 [2024-10-27 11:42:52.872882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.873221] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:07.700 [2024-10-27 11:42:52.892120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.892392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:07.700 [2024-10-27 11:42:52.892425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.894 ms 
00:28:07.700 [2024-10-27 11:42:52.892434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.902188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.902233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:07.700 [2024-10-27 11:42:52.902247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:07.700 [2024-10-27 11:42:52.902255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.902637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.902650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:07.700 [2024-10-27 11:42:52.902661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:28:07.700 [2024-10-27 11:42:52.902668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.902724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.902738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:07.700 [2024-10-27 11:42:52.902747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:07.700 [2024-10-27 11:42:52.902755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.902781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.902790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:07.700 [2024-10-27 11:42:52.902798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.700 [2024-10-27 11:42:52.902806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.902826] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:07.700 [2024-10-27 11:42:52.906232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.906272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:07.700 [2024-10-27 11:42:52.906283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.410 ms 00:28:07.700 [2024-10-27 11:42:52.906291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.906337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.906350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:07.700 [2024-10-27 11:42:52.906359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:07.700 [2024-10-27 11:42:52.906367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.906403] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:07.700 [2024-10-27 11:42:52.906426] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:07.700 [2024-10-27 11:42:52.906463] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:07.700 [2024-10-27 11:42:52.906478] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:07.700 [2024-10-27 
11:42:52.906587] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:07.700 [2024-10-27 11:42:52.906599] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:07.700 [2024-10-27 11:42:52.906610] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:07.700 [2024-10-27 11:42:52.906620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:07.700 [2024-10-27 11:42:52.906630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:07.700 [2024-10-27 11:42:52.906638] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:07.700 [2024-10-27 11:42:52.906646] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:07.700 [2024-10-27 11:42:52.906654] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:07.700 [2024-10-27 11:42:52.906661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:07.700 [2024-10-27 11:42:52.906670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.700 [2024-10-27 11:42:52.906678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:07.700 [2024-10-27 11:42:52.906689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:28:07.700 [2024-10-27 11:42:52.906696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.700 [2024-10-27 11:42:52.906781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.701 [2024-10-27 11:42:52.906789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:07.701 [2024-10-27 11:42:52.906797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:28:07.701 [2024-10-27 11:42:52.906805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.701 [2024-10-27 11:42:52.906908] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:07.701 [2024-10-27 11:42:52.906918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:07.701 [2024-10-27 11:42:52.906927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.701 [2024-10-27 11:42:52.906938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.906947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:07.701 [2024-10-27 11:42:52.906954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.906960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:07.701 [2024-10-27 11:42:52.906967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:07.701 [2024-10-27 11:42:52.906976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:07.701 [2024-10-27 11:42:52.906983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.906989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:07.701 [2024-10-27 11:42:52.906996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:07.701 [2024-10-27 11:42:52.907003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 
11:42:52.907010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:07.701 [2024-10-27 11:42:52.907017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:07.701 [2024-10-27 11:42:52.907026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:07.701 [2024-10-27 11:42:52.907039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:07.701 [2024-10-27 11:42:52.907046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:07.701 [2024-10-27 11:42:52.907060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:07.701 [2024-10-27 11:42:52.907087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:07.701 [2024-10-27 11:42:52.907108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:07.701 [2024-10-27 11:42:52.907127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:07.701 [2024-10-27 11:42:52.907148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:07.701 [2024-10-27 11:42:52.907169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:07.701 [2024-10-27 11:42:52.907189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:07.701 [2024-10-27 11:42:52.907208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:07.701 [2024-10-27 11:42:52.907215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907221] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:07.701 [2024-10-27 11:42:52.907229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:07.701 
[2024-10-27 11:42:52.907237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.701 [2024-10-27 11:42:52.907255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:07.701 [2024-10-27 11:42:52.907263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:07.701 [2024-10-27 11:42:52.907270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:07.701 [2024-10-27 11:42:52.907277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:07.701 [2024-10-27 11:42:52.907284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:07.701 [2024-10-27 11:42:52.907305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:07.701 [2024-10-27 11:42:52.907315] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:07.701 [2024-10-27 11:42:52.907325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:07.701 [2024-10-27 11:42:52.907342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:07.701 [2024-10-27 11:42:52.907364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:07.701 [2024-10-27 11:42:52.907372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:07.701 [2024-10-27 11:42:52.907379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:07.701 [2024-10-27 11:42:52.907388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:07.701 [2024-10-27 11:42:52.907440] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:07.701 [2024-10-27 11:42:52.907449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.701 [2024-10-27 11:42:52.907465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:07.701 [2024-10-27 11:42:52.907473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:07.701 [2024-10-27 11:42:52.907481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:07.701 [2024-10-27 11:42:52.907489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.701 [2024-10-27 11:42:52.907500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:07.701 [2024-10-27 11:42:52.907507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:28:07.701 [2024-10-27 11:42:52.907515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.701 [2024-10-27 11:42:52.936851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.701 [2024-10-27 11:42:52.936900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:07.702 [2024-10-27 11:42:52.936912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.277 ms 00:28:07.702 [2024-10-27 11:42:52.936920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.702 [2024-10-27 11:42:52.936964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.702 [2024-10-27 11:42:52.936973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:07.702 [2024-10-27 11:42:52.936982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:07.702 [2024-10-27 11:42:52.936990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.702 [2024-10-27 11:42:52.972411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.702 [2024-10-27 11:42:52.972455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:07.702 [2024-10-27 11:42:52.972467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.359 ms 00:28:07.702 [2024-10-27 11:42:52.972476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.702 [2024-10-27 11:42:52.972521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.702 [2024-10-27 11:42:52.972530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:07.702 [2024-10-27 11:42:52.972539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.702 [2024-10-27 11:42:52.972548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.702 [2024-10-27 11:42:52.972675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.702 [2024-10-27 11:42:52.972693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:07.702 [2024-10-27 11:42:52.972702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:07.702 [2024-10-27 11:42:52.972710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.702 [2024-10-27 11:42:52.972759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.702 [2024-10-27 11:42:52.972768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:07.702 [2024-10-27 11:42:52.972781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:07.702 [2024-10-27 11:42:52.972789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:52.990642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:52.990686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:07.962 [2024-10-27 11:42:52.990697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.832 ms 00:28:07.962 [2024-10-27 11:42:52.990706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:52.990823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:52.990835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:07.962 [2024-10-27 11:42:52.990844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:07.962 [2024-10-27 11:42:52.990853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.019829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.020029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:07.962 [2024-10-27 11:42:53.020054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.954 ms 00:28:07.962 [2024-10-27 11:42:53.020063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.030318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.030477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:07.962 [2024-10-27 11:42:53.030498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:28:07.962 [2024-10-27 11:42:53.030518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.095726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.095789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:07.962 [2024-10-27 11:42:53.095811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.135 ms 00:28:07.962 [2024-10-27 11:42:53.095820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.095992] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:07.962 [2024-10-27 11:42:53.096109] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:07.962 [2024-10-27 11:42:53.096223] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:07.962 [2024-10-27 11:42:53.096359] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:07.962 [2024-10-27 11:42:53.096372] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.096383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:07.962 [2024-10-27 11:42:53.096394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:28:07.962 [2024-10-27 11:42:53.096402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.096495] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:07.962 [2024-10-27 11:42:53.096508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.096517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:07.962 [2024-10-27 11:42:53.096530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:07.962 [2024-10-27 11:42:53.096540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.113367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.113552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:07.962 [2024-10-27 11:42:53.113581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.801 ms 00:28:07.962 [2024-10-27 11:42:53.113590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.122643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.122688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:07.962 [2024-10-27 11:42:53.122699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:07.962 [2024-10-27 11:42:53.122708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.962 [2024-10-27 11:42:53.122813] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:07.962 [2024-10-27 11:42:53.123037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.962 [2024-10-27 11:42:53.123056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:07.962 [2024-10-27 11:42:53.123066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:28:07.962 [2024-10-27 11:42:53.123074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.906 [2024-10-27 11:42:53.867737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.906 [2024-10-27 11:42:53.868077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:08.906 [2024-10-27 11:42:53.868107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 743.613 ms 00:28:08.906 [2024-10-27 11:42:53.868117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.906 [2024-10-27 11:42:53.872903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.907 [2024-10-27 11:42:53.872953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:08.907 [2024-10-27 11:42:53.872967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.535 ms 00:28:08.907 [2024-10-27 11:42:53.872975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.907 [2024-10-27 11:42:53.873971] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:08.907 [2024-10-27 11:42:53.874030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.907 [2024-10-27 11:42:53.874040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:08.907 [2024-10-27 11:42:53.874050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.020 ms 00:28:08.907 [2024-10-27 11:42:53.874059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.907 [2024-10-27 11:42:53.874101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.907 [2024-10-27 11:42:53.874111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:08.907 [2024-10-27 11:42:53.874121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:08.907 [2024-10-27 11:42:53.874129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.907 [2024-10-27 11:42:53.874171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 751.358 ms, result 0 00:28:08.907 [2024-10-27 11:42:53.874213] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:08.907 [2024-10-27 11:42:53.874370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.907 [2024-10-27 11:42:53.874387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:08.907 [2024-10-27 11:42:53.874397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.158 ms 00:28:08.907 [2024-10-27 11:42:53.874405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.492834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.492918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:09.481 [2024-10-27 11:42:54.492936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 617.225 ms 00:28:09.481 [2024-10-27 11:42:54.492946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.497859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.497908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:09.481 [2024-10-27 11:42:54.497920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.654 ms 00:28:09.481 [2024-10-27 11:42:54.497928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.498864] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:09.481 [2024-10-27 11:42:54.498911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.498921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:09.481 [2024-10-27 11:42:54.498931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.949 ms 00:28:09.481 [2024-10-27 11:42:54.498939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.498976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.498986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:09.481 [2024-10-27 11:42:54.498995] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:09.481 [2024-10-27 11:42:54.499003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.499044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 624.821 ms, result 0 00:28:09.481 [2024-10-27 11:42:54.499090] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:09.481 [2024-10-27 11:42:54.499103] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:09.481 [2024-10-27 11:42:54.499113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.499122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:09.481 [2024-10-27 11:42:54.499131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1376.324 ms 00:28:09.481 [2024-10-27 11:42:54.499139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.499170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.499180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:09.481 [2024-10-27 11:42:54.499193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:09.481 [2024-10-27 11:42:54.499201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.511560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:09.481 [2024-10-27 11:42:54.511707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.511721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:09.481 [2024-10-27 11:42:54.511731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.489 ms 00:28:09.481 [2024-10-27 11:42:54.511739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.481 [2024-10-27 11:42:54.512479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.481 [2024-10-27 11:42:54.512499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:09.482 [2024-10-27 11:42:54.512510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.657 ms 00:28:09.482 [2024-10-27 11:42:54.512521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.514757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.514782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:09.482 [2024-10-27 11:42:54.514793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.218 ms 00:28:09.482 [2024-10-27 11:42:54.514802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.514843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.514851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:09.482 [2024-10-27 11:42:54.514860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:09.482 [2024-10-27 11:42:54.514868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.514986] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.514997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:09.482 [2024-10-27 11:42:54.515005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:09.482 [2024-10-27 11:42:54.515013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.515034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.515043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:09.482 [2024-10-27 11:42:54.515051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:09.482 [2024-10-27 11:42:54.515059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.515088] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:09.482 [2024-10-27 11:42:54.515101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.515109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:09.482 [2024-10-27 11:42:54.515118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:09.482 [2024-10-27 11:42:54.515125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.515180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.482 [2024-10-27 11:42:54.515191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:09.482 [2024-10-27 11:42:54.515199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:09.482 [2024-10-27 11:42:54.515207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.482 [2024-10-27 11:42:54.516410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1644.042 ms, result 0 00:28:09.482 [2024-10-27 11:42:54.532083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.482 [2024-10-27 11:42:54.548081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:09.482 [2024-10-27 11:42:54.557078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:09.482 Validate MD5 checksum, iteration 1 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:09.482 11:42:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.482 [2024-10-27 11:42:54.660241] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:28:09.482 [2024-10-27 11:42:54.660689] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80872 ] 00:28:09.743 [2024-10-27 11:42:54.816753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.743 [2024-10-27 11:42:54.962098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.660  [2024-10-27T11:42:57.512Z] Copying: 530/1024 [MB] (530 MBps) [2024-10-27T11:43:02.798Z] Copying: 1024/1024 [MB] (average 571 MBps) 00:28:17.517 00:28:17.517 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:17.517 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:18.901 Validate MD5 checksum, iteration 2 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa69ca03a09dbdd860c28cd53ca01ae3 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa69ca03a09dbdd860c28cd53ca01ae3 != \f\a\6\9\c\a\0\3\a\0\9\d\b\d\d\8\6\0\c\2\8\c\d\5\3\c\a\0\1\a\e\3 ]] 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:18.901 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:18.902 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:18.902 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:18.902 11:43:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.161 [2024-10-27 11:43:04.200093] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:28:19.161 [2024-10-27 11:43:04.200221] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80973 ] 00:28:19.161 [2024-10-27 11:43:04.362158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.419 [2024-10-27 11:43:04.457462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.794  [2024-10-27T11:43:06.644Z] Copying: 673/1024 [MB] (673 MBps) [2024-10-27T11:43:08.029Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:28:22.748 00:28:22.748 11:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:22.748 11:43:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:25.282 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:25.282 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=66d30c913d3428045dcc000bb5627238 00:28:25.282 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 66d30c913d3428045dcc000bb5627238 != \6\6\d\3\0\c\9\1\3\d\3\4\2\8\0\4\5\d\c\c\0\0\0\b\b\5\6\2\7\2\3\8 ]] 00:28:25.282 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:25.282 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:25.283 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:25.283 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:25.283 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:25.283 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80833 ]] 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80833 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80833 ']' 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80833 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80833 00:28:25.283 killing process with pid 80833 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80833' 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80833 00:28:25.283 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80833 00:28:25.544 [2024-10-27 11:43:10.619845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:25.544 [2024-10-27 11:43:10.632569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.544 [2024-10-27 11:43:10.632602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:25.544 [2024-10-27 11:43:10.632613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:25.544 [2024-10-27 11:43:10.632620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.544 [2024-10-27 11:43:10.632638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:25.544 [2024-10-27 11:43:10.634741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.544 [2024-10-27 11:43:10.634767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:25.544 [2024-10-27 11:43:10.634776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.092 ms 00:28:25.544 [2024-10-27 11:43:10.634785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.544 [2024-10-27 11:43:10.634967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.544 [2024-10-27 11:43:10.634975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:25.544 [2024-10-27 11:43:10.634981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:28:25.544 [2024-10-27 11:43:10.634987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.544 [2024-10-27 11:43:10.636041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.544 [2024-10-27 11:43:10.636145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:25.544 [2024-10-27 11:43:10.636157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:28:25.544 [2024-10-27 11:43:10.636163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.544 [2024-10-27 11:43:10.637044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.544 [2024-10-27 11:43:10.637063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:25.545 [2024-10-27 11:43:10.637070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:28:25.545 [2024-10-27 11:43:10.637077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.644566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.644594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:25.545 [2024-10-27 11:43:10.644602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.452 ms 00:28:25.545 [2024-10-27 11:43:10.644607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.648850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.648879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:25.545 [2024-10-27 11:43:10.648887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.210 ms 00:28:25.545 [2024-10-27 11:43:10.648894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.648945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.648952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:25.545 [2024-10-27 11:43:10.648958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:25.545 [2024-10-27 11:43:10.648964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.656155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.656188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:25.545 [2024-10-27 11:43:10.656194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.178 ms 00:28:25.545 [2024-10-27 11:43:10.656200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.663107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.663212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:25.545 [2024-10-27 11:43:10.663224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.880 ms 00:28:25.545 [2024-10-27 11:43:10.663230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.670300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.670324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:25.545 [2024-10-27 11:43:10.670331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.039 ms 00:28:25.545 [2024-10-27 11:43:10.670336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.677256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.677281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:25.545 [2024-10-27 11:43:10.677288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.872 ms 00:28:25.545 [2024-10-27 11:43:10.677304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.677330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:25.545 [2024-10-27 11:43:10.677345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:25.545 [2024-10-27 11:43:10.677352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:25.545 [2024-10-27 11:43:10.677359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:25.545 [2024-10-27 11:43:10.677365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677381] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:25.545 [2024-10-27 11:43:10.677449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:25.545 [2024-10-27 11:43:10.677455] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 08fea97b-55b0-4a3b-8fbf-5c6fef8e948f 00:28:25.545 [2024-10-27 11:43:10.677461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:25.545 [2024-10-27 11:43:10.677488] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:25.545 [2024-10-27 11:43:10.677493] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:25.545 [2024-10-27 11:43:10.677499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:25.545 [2024-10-27 11:43:10.677504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:25.545 [2024-10-27 11:43:10.677510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:25.545 [2024-10-27 11:43:10.677515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:25.545 [2024-10-27 11:43:10.677520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:25.545 [2024-10-27 11:43:10.677525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:25.545 [2024-10-27 11:43:10.677531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.677538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:25.545 [2024-10-27 11:43:10.677547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:28:25.545 [2024-10-27 11:43:10.677553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.687304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.687327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:25.545 [2024-10-27 11:43:10.687334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.737 ms 00:28:25.545 [2024-10-27 11:43:10.687340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.687610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.545 [2024-10-27 11:43:10.687621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:25.545 [2024-10-27 11:43:10.687628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:28:25.545 [2024-10-27 11:43:10.687633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.721535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.545 [2024-10-27 11:43:10.721637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:25.545 [2024-10-27 11:43:10.721679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.545 [2024-10-27 11:43:10.721696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.722618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.545 [2024-10-27 11:43:10.722702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:25.545 [2024-10-27 11:43:10.722742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.545 [2024-10-27 11:43:10.722759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.722836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.545 [2024-10-27 11:43:10.722870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:25.545 [2024-10-27 11:43:10.722901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.545 [2024-10-27 11:43:10.722916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.722937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.545 [2024-10-27 11:43:10.722953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:25.545 [2024-10-27 11:43:10.722971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.545 [2024-10-27 11:43:10.722985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.545 [2024-10-27 11:43:10.782905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.545 [2024-10-27 11:43:10.783027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:25.545 [2024-10-27 11:43:10.783066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.545 [2024-10-27 11:43:10.783082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.805 [2024-10-27 11:43:10.831804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.805 [2024-10-27 11:43:10.831926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:25.805 [2024-10-27 11:43:10.831965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.805 [2024-10-27 11:43:10.831982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.805 [2024-10-27 11:43:10.832043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832061] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:25.806 [2024-10-27 11:43:10.832077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:25.806 [2024-10-27 11:43:10.832178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:25.806 [2024-10-27 11:43:10.832436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:25.806 [2024-10-27 11:43:10.832553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:25.806 [2024-10-27 11:43:10.832668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:25.806 [2024-10-27 11:43:10.832747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:25.806 [2024-10-27 11:43:10.832762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:25.806 [2024-10-27 11:43:10.832779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.806 [2024-10-27 11:43:10.832967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 200.377 ms, result 0 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.377 Remove shared memory files 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80613 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:26.377 ************************************ 00:28:26.377 END TEST ftl_upgrade_shutdown 00:28:26.377 ************************************ 00:28:26.377 00:28:26.377 real 1m24.333s 00:28:26.377 user 1m54.280s 00:28:26.377 sys 0m20.342s 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.377 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.377 11:43:11 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:26.377 11:43:11 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:26.377 11:43:11 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:26.377 11:43:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:26.377 11:43:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:26.377 ************************************ 00:28:26.377 START TEST ftl_restore_fast 00:28:26.377 ************************************ 00:28:26.377 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:26.377 * Looking for test storage... 00:28:26.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.377 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:26.377 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1689 -- # lcov --version 00:28:26.377 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.639 --rc genhtml_branch_coverage=1 00:28:26.639 --rc genhtml_function_coverage=1 00:28:26.639 --rc genhtml_legend=1 00:28:26.639 --rc geninfo_all_blocks=1 00:28:26.639 --rc geninfo_unexecuted_blocks=1 00:28:26.639 00:28:26.639 ' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.639 --rc genhtml_branch_coverage=1 00:28:26.639 --rc genhtml_function_coverage=1 00:28:26.639 --rc genhtml_legend=1 00:28:26.639 --rc geninfo_all_blocks=1 00:28:26.639 --rc geninfo_unexecuted_blocks=1 00:28:26.639 00:28:26.639 ' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.639 --rc genhtml_branch_coverage=1 00:28:26.639 --rc genhtml_function_coverage=1 00:28:26.639 --rc genhtml_legend=1 00:28:26.639 --rc geninfo_all_blocks=1 00:28:26.639 --rc geninfo_unexecuted_blocks=1 00:28:26.639 00:28:26.639 ' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:26.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.639 --rc genhtml_branch_coverage=1 00:28:26.639 --rc genhtml_function_coverage=1 00:28:26.639 --rc genhtml_legend=1 00:28:26.639 --rc geninfo_all_blocks=1 00:28:26.639 --rc geninfo_unexecuted_blocks=1 00:28:26.639 00:28:26.639 ' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cn6mt3j44R 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:26.639 11:43:11 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81131 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81131 00:28:26.639 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 81131 ']' 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:26.640 11:43:11 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:26.640 [2024-10-27 11:43:11.806404] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:28:26.640 [2024-10-27 11:43:11.806839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81131 ] 00:28:26.900 [2024-10-27 11:43:11.969454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.900 [2024-10-27 11:43:12.057187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:27.471 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:27.733 11:43:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:27.993 { 00:28:27.993 "name": "nvme0n1", 00:28:27.993 "aliases": [ 00:28:27.993 "7be006a8-58f4-4adb-a922-9f44ebcba11a" 00:28:27.993 ], 00:28:27.993 "product_name": "NVMe disk", 00:28:27.993 "block_size": 4096, 00:28:27.993 "num_blocks": 1310720, 00:28:27.993 "uuid": "7be006a8-58f4-4adb-a922-9f44ebcba11a", 00:28:27.993 "numa_id": -1, 00:28:27.993 "assigned_rate_limits": { 00:28:27.993 "rw_ios_per_sec": 0, 00:28:27.993 "rw_mbytes_per_sec": 0, 00:28:27.993 "r_mbytes_per_sec": 0, 00:28:27.993 "w_mbytes_per_sec": 0 00:28:27.993 }, 00:28:27.993 "claimed": true, 00:28:27.993 "claim_type": "read_many_write_one", 00:28:27.993 "zoned": false, 00:28:27.993 "supported_io_types": { 00:28:27.993 "read": true, 00:28:27.993 "write": true, 00:28:27.993 "unmap": true, 00:28:27.993 "flush": true, 00:28:27.993 "reset": true, 00:28:27.993 "nvme_admin": true, 00:28:27.993 "nvme_io": true, 00:28:27.993 "nvme_io_md": false, 00:28:27.993 "write_zeroes": true, 00:28:27.993 "zcopy": false, 00:28:27.993 "get_zone_info": false, 00:28:27.993 "zone_management": false, 00:28:27.993 "zone_append": false, 00:28:27.993 "compare": true, 00:28:27.993 "compare_and_write": false, 00:28:27.993 "abort": true, 00:28:27.993 "seek_hole": false, 00:28:27.993 "seek_data": false, 00:28:27.993 "copy": true, 00:28:27.993 "nvme_iov_md": false 00:28:27.993 }, 00:28:27.993 "driver_specific": { 00:28:27.993 "nvme": [ 00:28:27.993 { 00:28:27.993 "pci_address": "0000:00:11.0", 00:28:27.993 "trid": { 00:28:27.993 "trtype": "PCIe", 00:28:27.993 "traddr": "0000:00:11.0" 00:28:27.993 }, 00:28:27.993 "ctrlr_data": { 00:28:27.993 "cntlid": 0, 00:28:27.993 "vendor_id": "0x1b36", 00:28:27.993 "model_number": "QEMU NVMe Ctrl", 00:28:27.993 "serial_number": "12341", 00:28:27.993 "firmware_revision": "8.0.0", 00:28:27.993 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:27.993 "oacs": { 00:28:27.993 "security": 0, 00:28:27.993 "format": 1, 00:28:27.993 "firmware": 0, 00:28:27.993 "ns_manage": 1 00:28:27.993 }, 00:28:27.993 "multi_ctrlr": false, 00:28:27.993 "ana_reporting": false 00:28:27.993 }, 00:28:27.993 "vs": { 00:28:27.993 "nvme_version": "1.4" 00:28:27.993 }, 00:28:27.993 "ns_data": { 00:28:27.993 "id": 1, 00:28:27.993 "can_share": false 00:28:27.993 } 00:28:27.993 } 00:28:27.993 ], 00:28:27.993 "mp_policy": "active_passive" 00:28:27.993 } 00:28:27.993 } 00:28:27.993 ]' 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:27.993 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:28.254 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=cc0f9fc9-fe87-42c7-8ad4-44076ec0c3d5 00:28:28.254 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:28.254 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc0f9fc9-fe87-42c7-8ad4-44076ec0c3d5 00:28:28.515 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:28.515 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=2ddc0898-07da-45e3-9f28-717f998662e9 00:28:28.515 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2ddc0898-07da-45e3-9f28-717f998662e9 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:28.776 11:43:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:29.038 { 00:28:29.038 "name": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:29.038 "aliases": [ 00:28:29.038 "lvs/nvme0n1p0" 00:28:29.038 ], 00:28:29.038 "product_name": "Logical Volume", 00:28:29.038 "block_size": 4096, 00:28:29.038 "num_blocks": 26476544, 00:28:29.038 "uuid": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:29.038 "assigned_rate_limits": { 00:28:29.038 "rw_ios_per_sec": 0, 00:28:29.038 "rw_mbytes_per_sec": 0, 00:28:29.038 "r_mbytes_per_sec": 0, 00:28:29.038 "w_mbytes_per_sec": 0 00:28:29.038 }, 00:28:29.038 "claimed": false, 00:28:29.038 "zoned": false, 00:28:29.038 "supported_io_types": { 00:28:29.038 "read": true, 00:28:29.038 "write": true, 00:28:29.038 "unmap": true, 00:28:29.038 "flush": false, 00:28:29.038 "reset": true, 00:28:29.038 "nvme_admin": false, 00:28:29.038 "nvme_io": false, 00:28:29.038 "nvme_io_md": false, 00:28:29.038 "write_zeroes": true, 00:28:29.038 "zcopy": false, 00:28:29.038 "get_zone_info": false, 00:28:29.038 "zone_management": false, 00:28:29.038 
"zone_append": false, 00:28:29.038 "compare": false, 00:28:29.038 "compare_and_write": false, 00:28:29.038 "abort": false, 00:28:29.038 "seek_hole": true, 00:28:29.038 "seek_data": true, 00:28:29.038 "copy": false, 00:28:29.038 "nvme_iov_md": false 00:28:29.038 }, 00:28:29.038 "driver_specific": { 00:28:29.038 "lvol": { 00:28:29.038 "lvol_store_uuid": "2ddc0898-07da-45e3-9f28-717f998662e9", 00:28:29.038 "base_bdev": "nvme0n1", 00:28:29.038 "thin_provision": true, 00:28:29.038 "num_allocated_clusters": 0, 00:28:29.038 "snapshot": false, 00:28:29.038 "clone": false, 00:28:29.038 "esnap_clone": false 00:28:29.038 } 00:28:29.038 } 00:28:29.038 } 00:28:29.038 ]' 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:29.038 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:29.298 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:29.559 { 00:28:29.559 "name": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:29.559 "aliases": [ 00:28:29.559 "lvs/nvme0n1p0" 00:28:29.559 ], 00:28:29.559 "product_name": "Logical Volume", 00:28:29.559 "block_size": 4096, 00:28:29.559 "num_blocks": 26476544, 00:28:29.559 "uuid": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:29.559 "assigned_rate_limits": { 00:28:29.559 "rw_ios_per_sec": 0, 00:28:29.559 "rw_mbytes_per_sec": 0, 00:28:29.559 "r_mbytes_per_sec": 0, 00:28:29.559 "w_mbytes_per_sec": 0 00:28:29.559 }, 00:28:29.559 "claimed": false, 00:28:29.559 "zoned": false, 00:28:29.559 "supported_io_types": { 00:28:29.559 "read": true, 00:28:29.559 "write": true, 00:28:29.559 "unmap": true, 00:28:29.559 "flush": false, 00:28:29.559 "reset": true, 00:28:29.559 "nvme_admin": false, 00:28:29.559 "nvme_io": false, 00:28:29.559 "nvme_io_md": false, 00:28:29.559 "write_zeroes": true, 00:28:29.559 "zcopy": false, 00:28:29.559 "get_zone_info": false, 00:28:29.559 
"zone_management": false, 00:28:29.559 "zone_append": false, 00:28:29.559 "compare": false, 00:28:29.559 "compare_and_write": false, 00:28:29.559 "abort": false, 00:28:29.559 "seek_hole": true, 00:28:29.559 "seek_data": true, 00:28:29.559 "copy": false, 00:28:29.559 "nvme_iov_md": false 00:28:29.559 }, 00:28:29.559 "driver_specific": { 00:28:29.559 "lvol": { 00:28:29.559 "lvol_store_uuid": "2ddc0898-07da-45e3-9f28-717f998662e9", 00:28:29.559 "base_bdev": "nvme0n1", 00:28:29.559 "thin_provision": true, 00:28:29.559 "num_allocated_clusters": 0, 00:28:29.559 "snapshot": false, 00:28:29.559 "clone": false, 00:28:29.559 "esnap_clone": false 00:28:29.559 } 00:28:29.559 } 00:28:29.559 } 00:28:29.559 ]' 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:29.559 11:43:14 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:29.821 11:43:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e8d62db8-8f2c-40f5-9d33-83ec9937ed06 00:28:30.082 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:30.082 { 00:28:30.082 "name": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:30.082 "aliases": [ 00:28:30.082 "lvs/nvme0n1p0" 00:28:30.082 ], 00:28:30.082 "product_name": "Logical Volume", 00:28:30.082 "block_size": 4096, 00:28:30.082 "num_blocks": 26476544, 00:28:30.082 "uuid": "e8d62db8-8f2c-40f5-9d33-83ec9937ed06", 00:28:30.082 "assigned_rate_limits": { 00:28:30.082 "rw_ios_per_sec": 0, 00:28:30.082 "rw_mbytes_per_sec": 0, 00:28:30.082 "r_mbytes_per_sec": 0, 00:28:30.082 "w_mbytes_per_sec": 0 00:28:30.082 }, 00:28:30.082 "claimed": false, 00:28:30.082 "zoned": false, 00:28:30.082 "supported_io_types": { 00:28:30.082 "read": true, 00:28:30.082 "write": true, 00:28:30.082 "unmap": true, 00:28:30.082 "flush": false, 00:28:30.082 "reset": true, 00:28:30.082 "nvme_admin": false, 00:28:30.082 "nvme_io": false, 00:28:30.082 "nvme_io_md": false, 00:28:30.082 "write_zeroes": true, 00:28:30.082 "zcopy": false, 00:28:30.082 "get_zone_info": false, 00:28:30.082 "zone_management": false, 00:28:30.082 "zone_append": false, 00:28:30.082 "compare": false, 00:28:30.082 "compare_and_write": false, 00:28:30.082 "abort": false, 
00:28:30.082 "seek_hole": true, 00:28:30.082 "seek_data": true, 00:28:30.082 "copy": false, 00:28:30.082 "nvme_iov_md": false 00:28:30.082 }, 00:28:30.082 "driver_specific": { 00:28:30.082 "lvol": { 00:28:30.082 "lvol_store_uuid": "2ddc0898-07da-45e3-9f28-717f998662e9", 00:28:30.082 "base_bdev": "nvme0n1", 00:28:30.082 "thin_provision": true, 00:28:30.082 "num_allocated_clusters": 0, 00:28:30.082 "snapshot": false, 00:28:30.082 "clone": false, 00:28:30.082 "esnap_clone": false 00:28:30.082 } 00:28:30.082 } 00:28:30.082 } 00:28:30.082 ]' 00:28:30.082 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:30.082 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:30.082 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e8d62db8-8f2c-40f5-9d33-83ec9937ed06 --l2p_dram_limit 10' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:30.083 11:43:15 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e8d62db8-8f2c-40f5-9d33-83ec9937ed06 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:30.343 [2024-10-27 11:43:15.432396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.343 [2024-10-27 11:43:15.432438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:30.343 [2024-10-27 11:43:15.432449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:30.343 [2024-10-27 11:43:15.432456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.343 [2024-10-27 11:43:15.432499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.343 [2024-10-27 11:43:15.432506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.343 [2024-10-27 11:43:15.432514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:30.343 [2024-10-27 11:43:15.432520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.343 [2024-10-27 11:43:15.432539] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:30.343 [2024-10-27 11:43:15.433115] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:30.343 [2024-10-27 11:43:15.433130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.433136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:30.344 [2024-10-27 11:43:15.433143] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:28:30.344 [2024-10-27 11:43:15.433149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.433210] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 00:28:30.344 [2024-10-27 11:43:15.434173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.434201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:30.344 [2024-10-27 11:43:15.434209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:30.344 [2024-10-27 11:43:15.434216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.439031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.439063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.344 [2024-10-27 11:43:15.439071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:28:30.344 [2024-10-27 11:43:15.439080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.439146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.439156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.344 [2024-10-27 11:43:15.439162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:30.344 [2024-10-27 11:43:15.439172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.439211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.439220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:30.344 [2024-10-27 11:43:15.439226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:30.344 [2024-10-27 11:43:15.439234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.439252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:30.344 [2024-10-27 11:43:15.442104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.442133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.344 [2024-10-27 11:43:15.442143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:28:30.344 [2024-10-27 11:43:15.442152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.442182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.442189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:30.344 [2024-10-27 11:43:15.442196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:30.344 [2024-10-27 11:43:15.442202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.442216] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:30.344 [2024-10-27 11:43:15.442329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:30.344 [2024-10-27 11:43:15.442342] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:30.344 [2024-10-27 11:43:15.442350] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:30.344 [2024-10-27 11:43:15.442359] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442373] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:30.344 [2024-10-27 11:43:15.442379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:30.344 [2024-10-27 11:43:15.442386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:30.344 [2024-10-27 11:43:15.442391] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:30.344 [2024-10-27 11:43:15.442400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.442406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:30.344 [2024-10-27 11:43:15.442413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:28:30.344 [2024-10-27 11:43:15.442425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.442489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.344 [2024-10-27 11:43:15.442495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:30.344 [2024-10-27 11:43:15.442502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:30.344 [2024-10-27 11:43:15.442507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.344 [2024-10-27 11:43:15.442581] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:30.344 [2024-10-27 11:43:15.442590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:30.344 [2024-10-27 11:43:15.442597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:30.344 [2024-10-27 11:43:15.442615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:30.344 [2024-10-27 11:43:15.442634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.344 [2024-10-27 11:43:15.442645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:30.344 [2024-10-27 11:43:15.442650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:30.344 [2024-10-27 11:43:15.442657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.344 [2024-10-27 11:43:15.442661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:30.344 [2024-10-27 11:43:15.442668] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:30.344 [2024-10-27 11:43:15.442673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:30.344 [2024-10-27 11:43:15.442686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:30.344 [2024-10-27 11:43:15.442705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:30.344 [2024-10-27 11:43:15.442721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:30.344 [2024-10-27 11:43:15.442738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:30.344 [2024-10-27 11:43:15.442754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:30.344 [2024-10-27 11:43:15.442773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.344 [2024-10-27 11:43:15.442784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:30.344 [2024-10-27 11:43:15.442789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:30.344 [2024-10-27 11:43:15.442795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.344 [2024-10-27 11:43:15.442800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:30.344 [2024-10-27 11:43:15.442806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:30.344 [2024-10-27 11:43:15.442810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:30.344 [2024-10-27 11:43:15.442822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:30.344 [2024-10-27 11:43:15.442828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442833] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:30.344 [2024-10-27 11:43:15.442846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:30.344 [2024-10-27 11:43:15.442851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
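For context on the bs=4096 / nb=26476544 / bdev_size=103424 values computed in the get_bdev_size calls above: the helper in autotest_common.sh derives the size in MiB from the block_size and num_blocks fields that bdev_get_bdevs reports. A minimal sketch of the same computation, assuming the app from this run is still serving RPC and using the lvol UUID seen in the log:

  # query the lvol by UUID (as the harness does) and derive its size in MiB
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BDEV=e8d62db8-8f2c-40f5-9d33-83ec9937ed06
  bs=$($RPC bdev_get_bdevs -b "$BDEV" | jq '.[] .block_size')   # 4096
  nb=$($RPC bdev_get_bdevs -b "$BDEV" | jq '.[] .num_blocks')   # 26476544
  echo $(( bs * nb / 1024 / 1024 ))                             # 103424 MiB = 26476544 blocks * 4096 B
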
00:28:30.344 [2024-10-27 11:43:15.442859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.344 [2024-10-27 11:43:15.442865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:30.344 [2024-10-27 11:43:15.442873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:30.344 [2024-10-27 11:43:15.442878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:30.344 [2024-10-27 11:43:15.442885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:30.344 [2024-10-27 11:43:15.442890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:30.344 [2024-10-27 11:43:15.442896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:30.344 [2024-10-27 11:43:15.442904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:30.344 [2024-10-27 11:43:15.442912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.344 [2024-10-27 11:43:15.442919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:30.344 [2024-10-27 11:43:15.442925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:30.344 [2024-10-27 11:43:15.442931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:30.344 [2024-10-27 11:43:15.442938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:30.344 [2024-10-27 11:43:15.442943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:30.344 [2024-10-27 11:43:15.442949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:30.345 [2024-10-27 11:43:15.442955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:30.345 [2024-10-27 11:43:15.442961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:30.345 [2024-10-27 11:43:15.442967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:30.345 [2024-10-27 11:43:15.442974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.442980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.442986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.442991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.442998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
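The layout dump above is emitted while bdev_ftl_create brings up ftl0 on top of the thin-provisioned lvol, with nvc0n1p0 (a 5171 MiB split of the second NVMe controller) acting as the write-buffer / NV cache. A condensed sketch of the RPC sequence the harness ran to get here; all names, addresses and values are taken from the log above, and the rpc.py socket of this app is assumed:

  # attach the cache controller, carve out a 5171 MiB split, then create the FTL bdev
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # exposes nvc0n1
  $RPC bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0, used as the cache
  $RPC -t 240 bdev_ftl_create -b ftl0 \
      -d e8d62db8-8f2c-40f5-9d33-83ec9937ed06 \
      --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown                 # base lvol, 10 MiB L2P cap, fast-shutdown path
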
00:28:30.345 [2024-10-27 11:43:15.443003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:30.345 [2024-10-27 11:43:15.443011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.443019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.345 [2024-10-27 11:43:15.443027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:30.345 [2024-10-27 11:43:15.443032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:30.345 [2024-10-27 11:43:15.443039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:30.345 [2024-10-27 11:43:15.443044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.345 [2024-10-27 11:43:15.443051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:30.345 [2024-10-27 11:43:15.443057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:28:30.345 [2024-10-27 11:43:15.443063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.345 [2024-10-27 11:43:15.443091] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:30.345 [2024-10-27 11:43:15.443101] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:34.549 [2024-10-27 11:43:19.272466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.272564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:34.549 [2024-10-27 11:43:19.272582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3829.355 ms 00:28:34.549 [2024-10-27 11:43:19.272593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.305093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.305171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.549 [2024-10-27 11:43:19.305186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.098 ms 00:28:34.549 [2024-10-27 11:43:19.305210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.305368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.305384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:34.549 [2024-10-27 11:43:19.305395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:28:34.549 [2024-10-27 11:43:19.305408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.341316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.341377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.549 [2024-10-27 11:43:19.341390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.850 ms 00:28:34.549 [2024-10-27 11:43:19.341403] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.341440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.341452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.549 [2024-10-27 11:43:19.341462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:34.549 [2024-10-27 11:43:19.341475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.342090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.342119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.549 [2024-10-27 11:43:19.342132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:28:34.549 [2024-10-27 11:43:19.342143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.342261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.342292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.549 [2024-10-27 11:43:19.342322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:34.549 [2024-10-27 11:43:19.342336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.360908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.360956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.549 [2024-10-27 11:43:19.360968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:28:34.549 [2024-10-27 11:43:19.360981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.374674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:34.549 [2024-10-27 11:43:19.378684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.378727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:34.549 [2024-10-27 11:43:19.378743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.606 ms 00:28:34.549 [2024-10-27 11:43:19.378752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.484845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.484918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:34.549 [2024-10-27 11:43:19.484940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.051 ms 00:28:34.549 [2024-10-27 11:43:19.484950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.485149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.485162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:34.549 [2024-10-27 11:43:19.485178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:28:34.549 [2024-10-27 11:43:19.485191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.511052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.511112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:34.549 [2024-10-27 11:43:19.511129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.781 ms 00:28:34.549 [2024-10-27 11:43:19.511138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.536339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.536389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:34.549 [2024-10-27 11:43:19.536405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:28:34.549 [2024-10-27 11:43:19.536413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.549 [2024-10-27 11:43:19.537022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.549 [2024-10-27 11:43:19.537035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:34.549 [2024-10-27 11:43:19.537047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:28:34.549 [2024-10-27 11:43:19.537055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.622934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 11:43:19.622992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:34.550 [2024-10-27 11:43:19.623012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.829 ms 00:28:34.550 [2024-10-27 11:43:19.623022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.650673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 11:43:19.650731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:34.550 [2024-10-27 11:43:19.650752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.543 ms 00:28:34.550 [2024-10-27 11:43:19.650760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.676860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 11:43:19.676915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:34.550 [2024-10-27 11:43:19.676930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:28:34.550 [2024-10-27 11:43:19.676938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.703381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 11:43:19.703456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:34.550 [2024-10-27 11:43:19.703473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.384 ms 00:28:34.550 [2024-10-27 11:43:19.703481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.703542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 11:43:19.703554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:34.550 [2024-10-27 11:43:19.703569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:34.550 [2024-10-27 11:43:19.703577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.703677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.550 [2024-10-27 
11:43:19.703690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:34.550 [2024-10-27 11:43:19.703701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:34.550 [2024-10-27 11:43:19.703709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.550 [2024-10-27 11:43:19.704883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4271.980 ms, result 0 00:28:34.550 { 00:28:34.550 "name": "ftl0", 00:28:34.550 "uuid": "ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab" 00:28:34.550 } 00:28:34.550 11:43:19 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:34.550 11:43:19 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:34.811 11:43:19 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:34.811 11:43:19 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:35.074 [2024-10-27 11:43:20.144266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.144366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:35.074 [2024-10-27 11:43:20.144382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.074 [2024-10-27 11:43:20.144402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.144429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:35.074 [2024-10-27 11:43:20.147560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.147609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:35.074 [2024-10-27 11:43:20.147628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.107 ms 00:28:35.074 [2024-10-27 11:43:20.147637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.147915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.147928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:35.074 [2024-10-27 11:43:20.147940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:28:35.074 [2024-10-27 11:43:20.147952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.151351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.151379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:35.074 [2024-10-27 11:43:20.151392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:28:35.074 [2024-10-27 11:43:20.151402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.157734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.157780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:35.074 [2024-10-27 11:43:20.157794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.282 ms 00:28:35.074 [2024-10-27 11:43:20.157803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.184883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.184937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:35.074 [2024-10-27 11:43:20.184954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.997 ms 00:28:35.074 [2024-10-27 11:43:20.184962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.202161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.202221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:35.074 [2024-10-27 11:43:20.202237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.136 ms 00:28:35.074 [2024-10-27 11:43:20.202246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.202439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.202453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:35.074 [2024-10-27 11:43:20.202467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:28:35.074 [2024-10-27 11:43:20.202475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.228541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.228595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:35.074 [2024-10-27 11:43:20.228610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:28:35.074 [2024-10-27 11:43:20.228618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.253917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.253966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:35.074 [2024-10-27 11:43:20.253981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.239 ms 00:28:35.074 [2024-10-27 11:43:20.253989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.278795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.278848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:35.074 [2024-10-27 11:43:20.278862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.744 ms 00:28:35.074 [2024-10-27 11:43:20.278869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.303871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.074 [2024-10-27 11:43:20.303923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:35.074 [2024-10-27 11:43:20.303938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.897 ms 00:28:35.074 [2024-10-27 11:43:20.303946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.074 [2024-10-27 11:43:20.303998] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:35.074 [2024-10-27 11:43:20.304015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304038] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304271] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:35.074 [2024-10-27 11:43:20.304511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 
[2024-10-27 11:43:20.304517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:35.075 [2024-10-27 11:43:20.304758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:35.075 [2024-10-27 11:43:20.304970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:35.075 [2024-10-27 11:43:20.304979] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 
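The per-band validity table above and the statistics that follow (valid LBAs, total writes, WAF, limits) are printed by ftl_debug.c while ftl0 is being torn down: restore.sh has just captured the bdev subsystem configuration and asked FTL to unload so the device can be re-attached later. A sketch of that capture-and-unload step; the commands are the ones traced earlier in the log, and the redirection to the ftl.json path used by spdk_dd further below is an assumption:

  # capture the current bdev config, then unload ftl0 (this triggers the shutdown dump above)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed destination, matching the spdk_dd --json below
  {
    echo '{"subsystems": ['
    $RPC save_subsystem_config -n bdev
    echo ']}'
  } > "$CFG"
  $RPC bdev_ftl_unload -b ftl0
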
00:28:35.075 [2024-10-27 11:43:20.304988] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:35.075 [2024-10-27 11:43:20.305002] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:35.075 [2024-10-27 11:43:20.305011] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:35.075 [2024-10-27 11:43:20.305022] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:35.075 [2024-10-27 11:43:20.305033] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:35.075 [2024-10-27 11:43:20.305043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:35.075 [2024-10-27 11:43:20.305050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:35.075 [2024-10-27 11:43:20.305059] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:35.075 [2024-10-27 11:43:20.305065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:35.075 [2024-10-27 11:43:20.305076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.075 [2024-10-27 11:43:20.305084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:35.075 [2024-10-27 11:43:20.305094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:28:35.075 [2024-10-27 11:43:20.305102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.075 [2024-10-27 11:43:20.319354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.075 [2024-10-27 11:43:20.319403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:35.075 [2024-10-27 11:43:20.319417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.203 ms 00:28:35.075 [2024-10-27 11:43:20.319426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.075 [2024-10-27 11:43:20.319844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.075 [2024-10-27 11:43:20.319858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:35.075 [2024-10-27 11:43:20.319869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:28:35.075 [2024-10-27 11:43:20.319877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.367008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.367062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.365 [2024-10-27 11:43:20.367078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.367086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.367157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.367166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.365 [2024-10-27 11:43:20.367177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.367185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.367274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.367285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.365 [2024-10-27 11:43:20.367311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.367319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.367343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.367353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.365 [2024-10-27 11:43:20.367364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.367372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.451884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.451948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.365 [2024-10-27 11:43:20.451964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.451972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.521330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.521386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.365 [2024-10-27 11:43:20.521401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.521409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.521522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.521539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.365 [2024-10-27 11:43:20.521551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.365 [2024-10-27 11:43:20.521559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.365 [2024-10-27 11:43:20.521615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.365 [2024-10-27 11:43:20.521625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.366 [2024-10-27 11:43:20.521636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.366 [2024-10-27 11:43:20.521644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.366 [2024-10-27 11:43:20.521746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.366 [2024-10-27 11:43:20.521759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.366 [2024-10-27 11:43:20.521772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.366 [2024-10-27 11:43:20.521781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.366 [2024-10-27 11:43:20.521826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.366 [2024-10-27 11:43:20.521837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:35.366 [2024-10-27 11:43:20.521847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.366 [2024-10-27 11:43:20.521855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.366 [2024-10-27 11:43:20.521900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.366 [2024-10-27 11:43:20.521911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:35.366 [2024-10-27 11:43:20.521925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.366 [2024-10-27 11:43:20.521934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.366 [2024-10-27 11:43:20.521988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.366 [2024-10-27 11:43:20.522001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.366 [2024-10-27 11:43:20.522010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.366 [2024-10-27 11:43:20.522019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.366 [2024-10-27 11:43:20.522171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.879 ms, result 0 00:28:35.366 true 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81131 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 81131 ']' 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 81131 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81131 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.366 killing process with pid 81131 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81131' 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 81131 00:28:35.366 11:43:20 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 81131 00:28:41.966 11:43:27 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:46.162 262144+0 records in 00:28:46.162 262144+0 records out 00:28:46.162 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.60712 s, 298 MB/s 00:28:46.162 11:43:30 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:47.542 11:43:32 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:47.542 [2024-10-27 11:43:32.731956] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
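At this point the first SPDK app has been killed and the restore phase begins: a 1 GiB file of random data is generated, checksummed, and written into ftl0 by spdk_dd, which rebuilds the bdev stack from the JSON config saved before the unload. The commands as run above, with paths from this workspace:

  # 1 GiB of random data (262144 blocks of 4 KiB), checksum of the source data, then write it to the FTL bdev
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K          # 262144 * 4096 B = 1073741824 B, as reported above
  md5sum "$TESTFILE"                                          # reference checksum of the written data
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$CFG"
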
00:28:47.542 [2024-10-27 11:43:32.732554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81356 ] 00:28:47.803 [2024-10-27 11:43:32.888408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.803 [2024-10-27 11:43:32.986742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.064 [2024-10-27 11:43:33.245559] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.064 [2024-10-27 11:43:33.245625] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.326 [2024-10-27 11:43:33.407745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.326 [2024-10-27 11:43:33.407806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.326 [2024-10-27 11:43:33.407826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:48.326 [2024-10-27 11:43:33.407835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.326 [2024-10-27 11:43:33.407894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.326 [2024-10-27 11:43:33.407905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.326 [2024-10-27 11:43:33.407917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:48.326 [2024-10-27 11:43:33.407925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.326 [2024-10-27 11:43:33.407948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.326 [2024-10-27 11:43:33.408815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.326 [2024-10-27 11:43:33.408872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.326 [2024-10-27 11:43:33.408881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.326 [2024-10-27 11:43:33.408892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:28:48.326 [2024-10-27 11:43:33.408901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.326 [2024-10-27 11:43:33.410773] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.326 [2024-10-27 11:43:33.423881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.423914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.327 [2024-10-27 11:43:33.423925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 00:28:48.327 [2024-10-27 11:43:33.423934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.423989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.424000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.327 [2024-10-27 11:43:33.424010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:48.327 [2024-10-27 11:43:33.424017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.429232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:48.327 [2024-10-27 11:43:33.429259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.327 [2024-10-27 11:43:33.429268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.166 ms 00:28:48.327 [2024-10-27 11:43:33.429276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.429358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.429368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.327 [2024-10-27 11:43:33.429375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:48.327 [2024-10-27 11:43:33.429383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.429429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.429441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.327 [2024-10-27 11:43:33.429449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:48.327 [2024-10-27 11:43:33.429456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.429477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.327 [2024-10-27 11:43:33.432870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.432898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.327 [2024-10-27 11:43:33.432907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.398 ms 00:28:48.327 [2024-10-27 11:43:33.432917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.432944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.432952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.327 [2024-10-27 11:43:33.432960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:48.327 [2024-10-27 11:43:33.432968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.432987] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.327 [2024-10-27 11:43:33.433005] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.327 [2024-10-27 11:43:33.433039] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.327 [2024-10-27 11:43:33.433057] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.327 [2024-10-27 11:43:33.433161] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.327 [2024-10-27 11:43:33.433171] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.327 [2024-10-27 11:43:33.433182] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.327 [2024-10-27 11:43:33.433193] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433201] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433230] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.327 [2024-10-27 11:43:33.433238] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.327 [2024-10-27 11:43:33.433246] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.327 [2024-10-27 11:43:33.433253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.327 [2024-10-27 11:43:33.433263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.433272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.327 [2024-10-27 11:43:33.433280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:28:48.327 [2024-10-27 11:43:33.433288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.433382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.327 [2024-10-27 11:43:33.433391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.327 [2024-10-27 11:43:33.433399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:48.327 [2024-10-27 11:43:33.433406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.327 [2024-10-27 11:43:33.433519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.327 [2024-10-27 11:43:33.433533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.327 [2024-10-27 11:43:33.433541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.327 [2024-10-27 11:43:33.433565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.327 [2024-10-27 11:43:33.433588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.327 [2024-10-27 11:43:33.433603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.327 [2024-10-27 11:43:33.433609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.327 [2024-10-27 11:43:33.433616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.327 [2024-10-27 11:43:33.433623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.327 [2024-10-27 11:43:33.433630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.327 [2024-10-27 11:43:33.433642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.327 [2024-10-27 11:43:33.433657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433663] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.327 [2024-10-27 11:43:33.433676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.327 [2024-10-27 11:43:33.433696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.327 [2024-10-27 11:43:33.433716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.327 [2024-10-27 11:43:33.433735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.327 [2024-10-27 11:43:33.433754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.327 [2024-10-27 11:43:33.433767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.327 [2024-10-27 11:43:33.433774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.327 [2024-10-27 11:43:33.433780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.327 [2024-10-27 11:43:33.433786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.327 [2024-10-27 11:43:33.433793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.327 [2024-10-27 11:43:33.433799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.327 [2024-10-27 11:43:33.433814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.327 [2024-10-27 11:43:33.433821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433828] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.327 [2024-10-27 11:43:33.433836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.327 [2024-10-27 11:43:33.433844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.327 [2024-10-27 11:43:33.433858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.327 [2024-10-27 11:43:33.433865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.327 [2024-10-27 11:43:33.433873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.327 
[2024-10-27 11:43:33.433880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.327 [2024-10-27 11:43:33.433886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.327 [2024-10-27 11:43:33.433893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.327 [2024-10-27 11:43:33.433901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.328 [2024-10-27 11:43:33.433909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.433918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.328 [2024-10-27 11:43:33.433925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.328 [2024-10-27 11:43:33.433931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.328 [2024-10-27 11:43:33.433939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.328 [2024-10-27 11:43:33.433946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.328 [2024-10-27 11:43:33.433953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.328 [2024-10-27 11:43:33.433960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.328 [2024-10-27 11:43:33.433967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.328 [2024-10-27 11:43:33.433974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.328 [2024-10-27 11:43:33.433982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.433989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.433996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.434003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.434010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.328 [2024-10-27 11:43:33.434017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.328 [2024-10-27 11:43:33.434025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.434035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.328 [2024-10-27 11:43:33.434043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.328 [2024-10-27 11:43:33.434051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.328 [2024-10-27 11:43:33.434058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.328 [2024-10-27 11:43:33.434065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.434073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.328 [2024-10-27 11:43:33.434080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:28:48.328 [2024-10-27 11:43:33.434087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.460901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.460936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.328 [2024-10-27 11:43:33.460948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.774 ms 00:28:48.328 [2024-10-27 11:43:33.460956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.461041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.461052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.328 [2024-10-27 11:43:33.461060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:48.328 [2024-10-27 11:43:33.461067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.505821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.505862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.328 [2024-10-27 11:43:33.505874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.706 ms 00:28:48.328 [2024-10-27 11:43:33.505882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.505922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.505931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.328 [2024-10-27 11:43:33.505940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:48.328 [2024-10-27 11:43:33.505950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.506361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.506390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.328 [2024-10-27 11:43:33.506400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:28:48.328 [2024-10-27 11:43:33.506407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.506535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.506545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.328 [2024-10-27 11:43:33.506553] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:48.328 [2024-10-27 11:43:33.506562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.520060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.520092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.328 [2024-10-27 11:43:33.520103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.472 ms 00:28:48.328 [2024-10-27 11:43:33.520113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.533249] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:48.328 [2024-10-27 11:43:33.533289] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.328 [2024-10-27 11:43:33.533308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.533317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.328 [2024-10-27 11:43:33.533326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.104 ms 00:28:48.328 [2024-10-27 11:43:33.533333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.557766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.557800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.328 [2024-10-27 11:43:33.557816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.393 ms 00:28:48.328 [2024-10-27 11:43:33.557823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.570001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.570041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.328 [2024-10-27 11:43:33.570051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.136 ms 00:28:48.328 [2024-10-27 11:43:33.570058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.581616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.581647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.328 [2024-10-27 11:43:33.581657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.523 ms 00:28:48.328 [2024-10-27 11:43:33.581664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.328 [2024-10-27 11:43:33.582271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.328 [2024-10-27 11:43:33.582306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.328 [2024-10-27 11:43:33.582317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:28:48.328 [2024-10-27 11:43:33.582324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.640308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.640366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.590 [2024-10-27 11:43:33.640382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.952 ms 00:28:48.590 [2024-10-27 11:43:33.640390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.650950] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.590 [2024-10-27 11:43:33.653584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.653622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.590 [2024-10-27 11:43:33.653634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.126 ms 00:28:48.590 [2024-10-27 11:43:33.653643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.653764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.653777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.590 [2024-10-27 11:43:33.653787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:48.590 [2024-10-27 11:43:33.653799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.653874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.653888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.590 [2024-10-27 11:43:33.653898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:48.590 [2024-10-27 11:43:33.653906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.653926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.653936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.590 [2024-10-27 11:43:33.653944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:48.590 [2024-10-27 11:43:33.653952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.653984] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.590 [2024-10-27 11:43:33.653994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.654002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.590 [2024-10-27 11:43:33.654014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:48.590 [2024-10-27 11:43:33.654022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.590 [2024-10-27 11:43:33.678757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.590 [2024-10-27 11:43:33.678803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.591 [2024-10-27 11:43:33.678816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.715 ms 00:28:48.591 [2024-10-27 11:43:33.678825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.591 [2024-10-27 11:43:33.678917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.591 [2024-10-27 11:43:33.678927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.591 [2024-10-27 11:43:33.678937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:48.591 [2024-10-27 11:43:33.678944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:48.591 [2024-10-27 11:43:33.680604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.353 ms, result 0 00:28:49.533  [2024-10-27T11:43:35.758Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-27T11:43:36.704Z] Copying: 30/1024 [MB] (19 MBps) [2024-10-27T11:43:38.093Z] Copying: 45/1024 [MB] (15 MBps) [2024-10-27T11:43:39.038Z] Copying: 71/1024 [MB] (25 MBps) [2024-10-27T11:43:39.981Z] Copying: 97/1024 [MB] (25 MBps) [2024-10-27T11:43:40.948Z] Copying: 109/1024 [MB] (12 MBps) [2024-10-27T11:43:41.892Z] Copying: 134/1024 [MB] (25 MBps) [2024-10-27T11:43:42.834Z] Copying: 145/1024 [MB] (10 MBps) [2024-10-27T11:43:43.776Z] Copying: 173/1024 [MB] (27 MBps) [2024-10-27T11:43:44.718Z] Copying: 198/1024 [MB] (24 MBps) [2024-10-27T11:43:46.104Z] Copying: 223/1024 [MB] (25 MBps) [2024-10-27T11:43:47.048Z] Copying: 249/1024 [MB] (25 MBps) [2024-10-27T11:43:47.991Z] Copying: 271/1024 [MB] (21 MBps) [2024-10-27T11:43:48.935Z] Copying: 283/1024 [MB] (11 MBps) [2024-10-27T11:43:49.878Z] Copying: 307/1024 [MB] (23 MBps) [2024-10-27T11:43:50.821Z] Copying: 317/1024 [MB] (10 MBps) [2024-10-27T11:43:51.766Z] Copying: 334840/1048576 [kB] (10096 kBps) [2024-10-27T11:43:52.711Z] Copying: 353/1024 [MB] (26 MBps) [2024-10-27T11:43:54.100Z] Copying: 375/1024 [MB] (21 MBps) [2024-10-27T11:43:55.087Z] Copying: 385/1024 [MB] (10 MBps) [2024-10-27T11:43:56.032Z] Copying: 399/1024 [MB] (14 MBps) [2024-10-27T11:43:56.976Z] Copying: 423/1024 [MB] (23 MBps) [2024-10-27T11:43:57.922Z] Copying: 446/1024 [MB] (22 MBps) [2024-10-27T11:43:58.866Z] Copying: 465/1024 [MB] (19 MBps) [2024-10-27T11:43:59.810Z] Copying: 479/1024 [MB] (14 MBps) [2024-10-27T11:44:00.754Z] Copying: 491/1024 [MB] (12 MBps) [2024-10-27T11:44:01.697Z] Copying: 511/1024 [MB] (20 MBps) [2024-10-27T11:44:03.085Z] Copying: 534/1024 [MB] (22 MBps) [2024-10-27T11:44:04.030Z] Copying: 556/1024 [MB] (22 MBps) [2024-10-27T11:44:04.974Z] Copying: 579/1024 [MB] (22 MBps) [2024-10-27T11:44:05.918Z] Copying: 604/1024 [MB] (25 MBps) [2024-10-27T11:44:06.861Z] Copying: 628/1024 [MB] (23 MBps) [2024-10-27T11:44:07.806Z] Copying: 662/1024 [MB] (34 MBps) [2024-10-27T11:44:08.748Z] Copying: 684/1024 [MB] (21 MBps) [2024-10-27T11:44:10.136Z] Copying: 707/1024 [MB] (23 MBps) [2024-10-27T11:44:10.709Z] Copying: 734/1024 [MB] (26 MBps) [2024-10-27T11:44:12.097Z] Copying: 747/1024 [MB] (13 MBps) [2024-10-27T11:44:13.041Z] Copying: 772/1024 [MB] (24 MBps) [2024-10-27T11:44:13.985Z] Copying: 797/1024 [MB] (25 MBps) [2024-10-27T11:44:14.929Z] Copying: 819/1024 [MB] (22 MBps) [2024-10-27T11:44:15.873Z] Copying: 841/1024 [MB] (21 MBps) [2024-10-27T11:44:16.816Z] Copying: 867/1024 [MB] (26 MBps) [2024-10-27T11:44:17.760Z] Copying: 893/1024 [MB] (25 MBps) [2024-10-27T11:44:18.702Z] Copying: 914/1024 [MB] (21 MBps) [2024-10-27T11:44:20.087Z] Copying: 937/1024 [MB] (22 MBps) [2024-10-27T11:44:21.030Z] Copying: 959/1024 [MB] (22 MBps) [2024-10-27T11:44:21.974Z] Copying: 971/1024 [MB] (12 MBps) [2024-10-27T11:44:21.974Z] Copying: 1012/1024 [MB] (41 MBps) [2024-10-27T11:44:21.974Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-10-27 11:44:21.917280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.693 [2024-10-27 11:44:21.917326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:36.693 [2024-10-27 11:44:21.917338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:36.693 [2024-10-27 11:44:21.917345] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:36.693 [2024-10-27 11:44:21.917364] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:36.693 [2024-10-27 11:44:21.919489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.693 [2024-10-27 11:44:21.919515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:36.693 [2024-10-27 11:44:21.919524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:29:36.693 [2024-10-27 11:44:21.919532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.693 [2024-10-27 11:44:21.921360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.693 [2024-10-27 11:44:21.921388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:36.693 [2024-10-27 11:44:21.921396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:29:36.693 [2024-10-27 11:44:21.921402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.693 [2024-10-27 11:44:21.921422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.693 [2024-10-27 11:44:21.921429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:36.693 [2024-10-27 11:44:21.921436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:36.693 [2024-10-27 11:44:21.921442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.693 [2024-10-27 11:44:21.921478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.693 [2024-10-27 11:44:21.921489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:36.693 [2024-10-27 11:44:21.921497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:36.693 [2024-10-27 11:44:21.921503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.693 [2024-10-27 11:44:21.921512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:36.693 [2024-10-27 11:44:21.921522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:36.693 [2024-10-27 11:44:21.921573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 
[2024-10-27 11:44:21.921586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:29:36.694 [2024-10-27 11:44:21.921735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.921999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:36.694 [2024-10-27 11:44:21.922063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:36.695 [2024-10-27 11:44:21.922120] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:36.695 [2024-10-27 11:44:21.922127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 00:29:36.695 [2024-10-27 11:44:21.922133] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:36.695 [2024-10-27 11:44:21.922139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:36.695 [2024-10-27 11:44:21.922144] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:36.695 [2024-10-27 11:44:21.922150] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:36.695 [2024-10-27 11:44:21.922156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:36.695 [2024-10-27 11:44:21.922164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:36.695 [2024-10-27 11:44:21.922170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:36.695 [2024-10-27 11:44:21.922175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:36.695 [2024-10-27 11:44:21.922180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:36.695 [2024-10-27 11:44:21.922185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:36.695 [2024-10-27 11:44:21.922190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:36.695 [2024-10-27 11:44:21.922197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:29:36.695 [2024-10-27 11:44:21.922203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.931823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.695 [2024-10-27 11:44:21.931848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:36.695 [2024-10-27 11:44:21.931860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.609 ms 00:29:36.695 [2024-10-27 11:44:21.931866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.932132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.695 [2024-10-27 11:44:21.932150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:36.695 [2024-10-27 11:44:21.932156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:29:36.695 [2024-10-27 11:44:21.932162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.957900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.695 [2024-10-27 11:44:21.957926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:36.695 [2024-10-27 11:44:21.957937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.695 [2024-10-27 11:44:21.957943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.957985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.695 [2024-10-27 11:44:21.957991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:36.695 [2024-10-27 11:44:21.957997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.695 [2024-10-27 11:44:21.958003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.958047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.695 [2024-10-27 11:44:21.958055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:36.695 [2024-10-27 11:44:21.958061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.695 [2024-10-27 11:44:21.958069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.695 [2024-10-27 11:44:21.958080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.695 [2024-10-27 11:44:21.958086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:36.695 [2024-10-27 11:44:21.958093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.695 [2024-10-27 11:44:21.958098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.956 [2024-10-27 11:44:22.017819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.956 [2024-10-27 11:44:22.017848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:36.957 [2024-10-27 11:44:22.017859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.017864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 
[2024-10-27 11:44:22.066737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.066767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:36.957 [2024-10-27 11:44:22.066776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.066782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.066817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.066824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:36.957 [2024-10-27 11:44:22.066831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.066837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.066878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.066885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:36.957 [2024-10-27 11:44:22.066891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.066897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.066954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.066963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:36.957 [2024-10-27 11:44:22.066970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.066975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.067001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.067009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:36.957 [2024-10-27 11:44:22.067014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.067021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.067047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.067054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:36.957 [2024-10-27 11:44:22.067060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.067065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.067097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:36.957 [2024-10-27 11:44:22.067105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:36.957 [2024-10-27 11:44:22.067111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:36.957 [2024-10-27 11:44:22.067117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.957 [2024-10-27 11:44:22.067203] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.899 ms, result 0 00:29:37.900 00:29:37.900 00:29:37.900 11:44:22 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:37.900 [2024-10-27 11:44:22.939132] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:29:37.900 [2024-10-27 11:44:22.939699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81863 ] 00:29:37.900 [2024-10-27 11:44:23.095547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.900 [2024-10-27 11:44:23.169051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.210 [2024-10-27 11:44:23.374511] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:38.210 [2024-10-27 11:44:23.374560] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:38.499 [2024-10-27 11:44:23.528808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.499 [2024-10-27 11:44:23.528843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:38.499 [2024-10-27 11:44:23.528856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:38.499 [2024-10-27 11:44:23.528862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.499 [2024-10-27 11:44:23.528898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.499 [2024-10-27 11:44:23.528906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:38.499 [2024-10-27 11:44:23.528913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:38.499 [2024-10-27 11:44:23.528919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.499 [2024-10-27 11:44:23.528932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:38.499 [2024-10-27 11:44:23.529490] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:38.499 [2024-10-27 11:44:23.529511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.499 [2024-10-27 11:44:23.529518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:38.500 [2024-10-27 11:44:23.529525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:29:38.500 [2024-10-27 11:44:23.529531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.529720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:38.500 [2024-10-27 11:44:23.529738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.529745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:38.500 [2024-10-27 11:44:23.529754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:38.500 [2024-10-27 11:44:23.529760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.529793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.529800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:38.500 [2024-10-27 11:44:23.529807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:38.500 
[2024-10-27 11:44:23.529812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.530012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.530021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:38.500 [2024-10-27 11:44:23.530029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:29:38.500 [2024-10-27 11:44:23.530034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.530111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.530119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:38.500 [2024-10-27 11:44:23.530125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:38.500 [2024-10-27 11:44:23.530131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.530147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.530154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:38.500 [2024-10-27 11:44:23.530160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:38.500 [2024-10-27 11:44:23.530168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.530181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:38.500 [2024-10-27 11:44:23.533020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.533047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:38.500 [2024-10-27 11:44:23.533054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:29:38.500 [2024-10-27 11:44:23.533060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.533085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.533091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:38.500 [2024-10-27 11:44:23.533097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:38.500 [2024-10-27 11:44:23.533103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.533132] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:38.500 [2024-10-27 11:44:23.533148] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:38.500 [2024-10-27 11:44:23.533176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:38.500 [2024-10-27 11:44:23.533188] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:38.500 [2024-10-27 11:44:23.533277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:38.500 [2024-10-27 11:44:23.533287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:38.500 [2024-10-27 11:44:23.533304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:29:38.500 [2024-10-27 11:44:23.533313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533319] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:38.500 [2024-10-27 11:44:23.533332] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:38.500 [2024-10-27 11:44:23.533339] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:38.500 [2024-10-27 11:44:23.533345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:38.500 [2024-10-27 11:44:23.533351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.533356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:38.500 [2024-10-27 11:44:23.533362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:29:38.500 [2024-10-27 11:44:23.533367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.533430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.500 [2024-10-27 11:44:23.533436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:38.500 [2024-10-27 11:44:23.533442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:38.500 [2024-10-27 11:44:23.533447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.500 [2024-10-27 11:44:23.533524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:38.500 [2024-10-27 11:44:23.533532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:38.500 [2024-10-27 11:44:23.533538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:38.500 [2024-10-27 11:44:23.533557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:38.500 [2024-10-27 11:44:23.533574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:38.500 [2024-10-27 11:44:23.533584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:38.500 [2024-10-27 11:44:23.533589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:38.500 [2024-10-27 11:44:23.533594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:38.500 [2024-10-27 11:44:23.533599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:38.500 [2024-10-27 11:44:23.533605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:38.500 [2024-10-27 11:44:23.533610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:29:38.500 [2024-10-27 11:44:23.533624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:38.500 [2024-10-27 11:44:23.533640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:38.500 [2024-10-27 11:44:23.533655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:38.500 [2024-10-27 11:44:23.533671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:38.500 [2024-10-27 11:44:23.533686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:38.500 [2024-10-27 11:44:23.533698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:38.500 [2024-10-27 11:44:23.533703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:38.500 [2024-10-27 11:44:23.533708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:38.500 [2024-10-27 11:44:23.533713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:38.500 [2024-10-27 11:44:23.533718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:38.500 [2024-10-27 11:44:23.533724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:38.500 [2024-10-27 11:44:23.533728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:38.501 [2024-10-27 11:44:23.533733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:38.501 [2024-10-27 11:44:23.533739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.501 [2024-10-27 11:44:23.533744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:38.501 [2024-10-27 11:44:23.533749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:38.501 [2024-10-27 11:44:23.533754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.501 [2024-10-27 11:44:23.533760] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:38.501 [2024-10-27 11:44:23.533766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:38.501 [2024-10-27 11:44:23.533771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:38.501 [2024-10-27 11:44:23.533776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:38.501 [2024-10-27 11:44:23.533782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:38.501 [2024-10-27 11:44:23.533787] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:38.501 [2024-10-27 11:44:23.533792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:38.501 [2024-10-27 11:44:23.533797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:38.501 [2024-10-27 11:44:23.533802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:38.501 [2024-10-27 11:44:23.533807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:38.501 [2024-10-27 11:44:23.533814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:38.501 [2024-10-27 11:44:23.533820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:38.501 [2024-10-27 11:44:23.533834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:38.501 [2024-10-27 11:44:23.533839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:38.501 [2024-10-27 11:44:23.533844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:38.501 [2024-10-27 11:44:23.533849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:38.501 [2024-10-27 11:44:23.533855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:38.501 [2024-10-27 11:44:23.533861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:38.501 [2024-10-27 11:44:23.533866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:38.501 [2024-10-27 11:44:23.533871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:38.501 [2024-10-27 11:44:23.533876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:38.501 [2024-10-27 11:44:23.533904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:38.501 [2024-10-27 11:44:23.533910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:38.501 [2024-10-27 11:44:23.533923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:38.501 [2024-10-27 11:44:23.533929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:38.501 [2024-10-27 11:44:23.533934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:38.501 [2024-10-27 11:44:23.533940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.533945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:38.501 [2024-10-27 11:44:23.533950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:29:38.501 [2024-10-27 11:44:23.533956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.552477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.552503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.501 [2024-10-27 11:44:23.552511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.491 ms 00:29:38.501 [2024-10-27 11:44:23.552517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.552575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.552581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:38.501 [2024-10-27 11:44:23.552587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:38.501 [2024-10-27 11:44:23.552595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.592887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.592918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.501 [2024-10-27 11:44:23.592927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.254 ms 00:29:38.501 [2024-10-27 11:44:23.592934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.592962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.592972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:38.501 [2024-10-27 11:44:23.592979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:38.501 [2024-10-27 11:44:23.592985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.593057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.593066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:38.501 [2024-10-27 11:44:23.593072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:38.501 [2024-10-27 11:44:23.593079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.593168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.593176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:38.501 [2024-10-27 11:44:23.593184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:38.501 [2024-10-27 11:44:23.593189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.603765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.603791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:38.501 [2024-10-27 11:44:23.603798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.563 ms 00:29:38.501 [2024-10-27 11:44:23.603804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.603889] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:38.501 [2024-10-27 11:44:23.603902] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:38.501 [2024-10-27 11:44:23.603909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.603915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:38.501 [2024-10-27 11:44:23.603922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:38.501 [2024-10-27 11:44:23.603929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.613070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.613092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:38.501 [2024-10-27 11:44:23.613101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.129 ms 00:29:38.501 [2024-10-27 11:44:23.613107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.613193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.613200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:38.501 [2024-10-27 11:44:23.613206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:38.501 [2024-10-27 11:44:23.613211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.613241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.501 [2024-10-27 11:44:23.613252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:38.501 [2024-10-27 11:44:23.613258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:38.501 [2024-10-27 11:44:23.613264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.501 [2024-10-27 11:44:23.613716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.613731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:38.502 [2024-10-27 11:44:23.613738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:29:38.502 [2024-10-27 11:44:23.613744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.613755] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
00:29:38.502 [2024-10-27 11:44:23.613762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.613772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:38.502 [2024-10-27 11:44:23.613779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:38.502 [2024-10-27 11:44:23.613784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.622373] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:38.502 [2024-10-27 11:44:23.622474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.622481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:38.502 [2024-10-27 11:44:23.622488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.675 ms 00:29:38.502 [2024-10-27 11:44:23.622493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.624092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.624111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:38.502 [2024-10-27 11:44:23.624121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:29:38.502 [2024-10-27 11:44:23.624127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.624196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.624204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:38.502 [2024-10-27 11:44:23.624211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:38.502 [2024-10-27 11:44:23.624217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.624233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.624239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:38.502 [2024-10-27 11:44:23.624249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:38.502 [2024-10-27 11:44:23.624255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.624276] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:38.502 [2024-10-27 11:44:23.624283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.624289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:38.502 [2024-10-27 11:44:23.624308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:38.502 [2024-10-27 11:44:23.624314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.642744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 11:44:23.642774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:38.502 [2024-10-27 11:44:23.642783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.415 ms 00:29:38.502 [2024-10-27 11:44:23.642790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.642843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.502 [2024-10-27 
11:44:23.642851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:38.502 [2024-10-27 11:44:23.642857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:38.502 [2024-10-27 11:44:23.642863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.502 [2024-10-27 11:44:23.643643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.510 ms, result 0 00:29:39.888  [2024-10-27T11:44:26.115Z] Copying: 11/1024 [MB] (11 MBps) [2024-10-27T11:44:27.059Z] Copying: 32/1024 [MB] (20 MBps) [2024-10-27T11:44:28.003Z] Copying: 53/1024 [MB] (21 MBps) [2024-10-27T11:44:28.949Z] Copying: 72/1024 [MB] (18 MBps) [2024-10-27T11:44:29.892Z] Copying: 82/1024 [MB] (10 MBps) [2024-10-27T11:44:30.834Z] Copying: 96/1024 [MB] (13 MBps) [2024-10-27T11:44:32.220Z] Copying: 123/1024 [MB] (27 MBps) [2024-10-27T11:44:32.803Z] Copying: 145/1024 [MB] (21 MBps) [2024-10-27T11:44:34.192Z] Copying: 166/1024 [MB] (20 MBps) [2024-10-27T11:44:35.136Z] Copying: 186/1024 [MB] (19 MBps) [2024-10-27T11:44:36.074Z] Copying: 203/1024 [MB] (17 MBps) [2024-10-27T11:44:37.014Z] Copying: 220/1024 [MB] (16 MBps) [2024-10-27T11:44:37.957Z] Copying: 237/1024 [MB] (17 MBps) [2024-10-27T11:44:38.903Z] Copying: 255/1024 [MB] (17 MBps) [2024-10-27T11:44:39.847Z] Copying: 268/1024 [MB] (13 MBps) [2024-10-27T11:44:40.791Z] Copying: 281/1024 [MB] (12 MBps) [2024-10-27T11:44:42.174Z] Copying: 292/1024 [MB] (10 MBps) [2024-10-27T11:44:43.118Z] Copying: 302/1024 [MB] (10 MBps) [2024-10-27T11:44:44.063Z] Copying: 313/1024 [MB] (10 MBps) [2024-10-27T11:44:45.008Z] Copying: 324/1024 [MB] (10 MBps) [2024-10-27T11:44:45.954Z] Copying: 334/1024 [MB] (10 MBps) [2024-10-27T11:44:46.901Z] Copying: 344/1024 [MB] (10 MBps) [2024-10-27T11:44:47.846Z] Copying: 355/1024 [MB] (10 MBps) [2024-10-27T11:44:48.788Z] Copying: 366/1024 [MB] (10 MBps) [2024-10-27T11:44:50.175Z] Copying: 376/1024 [MB] (10 MBps) [2024-10-27T11:44:51.117Z] Copying: 394/1024 [MB] (17 MBps) [2024-10-27T11:44:52.059Z] Copying: 405/1024 [MB] (11 MBps) [2024-10-27T11:44:53.067Z] Copying: 424/1024 [MB] (18 MBps) [2024-10-27T11:44:54.047Z] Copying: 434/1024 [MB] (10 MBps) [2024-10-27T11:44:54.992Z] Copying: 445/1024 [MB] (10 MBps) [2024-10-27T11:44:55.935Z] Copying: 456/1024 [MB] (10 MBps) [2024-10-27T11:44:56.880Z] Copying: 466/1024 [MB] (10 MBps) [2024-10-27T11:44:57.823Z] Copying: 477/1024 [MB] (10 MBps) [2024-10-27T11:44:59.209Z] Copying: 488/1024 [MB] (10 MBps) [2024-10-27T11:44:59.786Z] Copying: 498/1024 [MB] (10 MBps) [2024-10-27T11:45:01.178Z] Copying: 514/1024 [MB] (15 MBps) [2024-10-27T11:45:02.121Z] Copying: 533/1024 [MB] (18 MBps) [2024-10-27T11:45:03.062Z] Copying: 549/1024 [MB] (16 MBps) [2024-10-27T11:45:04.005Z] Copying: 560/1024 [MB] (11 MBps) [2024-10-27T11:45:04.946Z] Copying: 573/1024 [MB] (12 MBps) [2024-10-27T11:45:05.887Z] Copying: 583/1024 [MB] (10 MBps) [2024-10-27T11:45:06.828Z] Copying: 602/1024 [MB] (18 MBps) [2024-10-27T11:45:08.214Z] Copying: 613/1024 [MB] (11 MBps) [2024-10-27T11:45:08.784Z] Copying: 631/1024 [MB] (17 MBps) [2024-10-27T11:45:10.169Z] Copying: 648/1024 [MB] (17 MBps) [2024-10-27T11:45:11.111Z] Copying: 667/1024 [MB] (18 MBps) [2024-10-27T11:45:12.055Z] Copying: 678/1024 [MB] (11 MBps) [2024-10-27T11:45:12.998Z] Copying: 689/1024 [MB] (11 MBps) [2024-10-27T11:45:13.941Z] Copying: 706/1024 [MB] (17 MBps) [2024-10-27T11:45:14.884Z] Copying: 717/1024 [MB] (10 MBps) [2024-10-27T11:45:15.828Z] Copying: 
731/1024 [MB] (14 MBps) [2024-10-27T11:45:17.218Z] Copying: 753/1024 [MB] (21 MBps) [2024-10-27T11:45:17.790Z] Copying: 768/1024 [MB] (15 MBps) [2024-10-27T11:45:19.174Z] Copying: 783/1024 [MB] (14 MBps) [2024-10-27T11:45:20.116Z] Copying: 799/1024 [MB] (15 MBps) [2024-10-27T11:45:21.063Z] Copying: 813/1024 [MB] (14 MBps) [2024-10-27T11:45:22.123Z] Copying: 828/1024 [MB] (15 MBps) [2024-10-27T11:45:23.068Z] Copying: 842/1024 [MB] (13 MBps) [2024-10-27T11:45:24.014Z] Copying: 863/1024 [MB] (21 MBps) [2024-10-27T11:45:24.959Z] Copying: 880/1024 [MB] (17 MBps) [2024-10-27T11:45:25.904Z] Copying: 897/1024 [MB] (17 MBps) [2024-10-27T11:45:26.847Z] Copying: 919/1024 [MB] (21 MBps) [2024-10-27T11:45:27.794Z] Copying: 942/1024 [MB] (23 MBps) [2024-10-27T11:45:29.183Z] Copying: 953/1024 [MB] (10 MBps) [2024-10-27T11:45:30.127Z] Copying: 966/1024 [MB] (12 MBps) [2024-10-27T11:45:31.072Z] Copying: 978/1024 [MB] (12 MBps) [2024-10-27T11:45:32.016Z] Copying: 989/1024 [MB] (10 MBps) [2024-10-27T11:45:32.961Z] Copying: 1004/1024 [MB] (15 MBps) [2024-10-27T11:45:33.222Z] Copying: 1015/1024 [MB] (10 MBps) [2024-10-27T11:45:33.485Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-27 11:45:33.221190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.204 [2024-10-27 11:45:33.221289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:48.204 [2024-10-27 11:45:33.221340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:48.204 [2024-10-27 11:45:33.221352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.204 [2024-10-27 11:45:33.221380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:48.204 [2024-10-27 11:45:33.225712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.204 [2024-10-27 11:45:33.225751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:48.204 [2024-10-27 11:45:33.225765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:30:48.204 [2024-10-27 11:45:33.225776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.204 [2024-10-27 11:45:33.226062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.204 [2024-10-27 11:45:33.226074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:48.204 [2024-10-27 11:45:33.226084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:30:48.204 [2024-10-27 11:45:33.226095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.204 [2024-10-27 11:45:33.226130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.204 [2024-10-27 11:45:33.226142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:48.204 [2024-10-27 11:45:33.226156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:48.204 [2024-10-27 11:45:33.226165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.204 [2024-10-27 11:45:33.226228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.204 [2024-10-27 11:45:33.226240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:48.204 [2024-10-27 11:45:33.226250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:48.204 [2024-10-27 11:45:33.226260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:48.204 [2024-10-27 11:45:33.226277] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:48.204 [2024-10-27 11:45:33.226304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:30:48.204 [2024-10-27 11:45:33.226545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:48.204 [2024-10-27 11:45:33.226834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.226998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227308] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:48.205 [2024-10-27 11:45:33.227338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:48.205 [2024-10-27 11:45:33.227348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 00:30:48.205 [2024-10-27 11:45:33.227361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:48.205 [2024-10-27 11:45:33.227370] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:48.205 [2024-10-27 11:45:33.227379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:48.205 [2024-10-27 11:45:33.227389] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:48.205 [2024-10-27 11:45:33.227398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:48.205 [2024-10-27 11:45:33.227408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:48.205 [2024-10-27 11:45:33.227417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:48.205 [2024-10-27 11:45:33.227425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:48.205 [2024-10-27 11:45:33.227433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:48.205 [2024-10-27 11:45:33.227443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.205 [2024-10-27 11:45:33.227453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:48.205 [2024-10-27 11:45:33.227463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:30:48.205 [2024-10-27 11:45:33.227472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.242409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.205 [2024-10-27 11:45:33.242444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:48.205 [2024-10-27 11:45:33.242455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.919 ms 00:30:48.205 [2024-10-27 11:45:33.242464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.242829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.205 [2024-10-27 11:45:33.242843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:48.205 [2024-10-27 11:45:33.242852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:30:48.205 [2024-10-27 11:45:33.242860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.278249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.278286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:48.205 [2024-10-27 11:45:33.278310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.278320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.278390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.278400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:48.205 
[2024-10-27 11:45:33.278409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.278419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.278474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.278485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:48.205 [2024-10-27 11:45:33.278495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.278504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.278521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.278531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:48.205 [2024-10-27 11:45:33.278541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.278549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.364243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.364321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:48.205 [2024-10-27 11:45:33.364337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.364347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.438479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.438546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:48.205 [2024-10-27 11:45:33.438559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.438569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.438680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.438692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:48.205 [2024-10-27 11:45:33.438701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.438711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.438751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.438768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:48.205 [2024-10-27 11:45:33.438778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.205 [2024-10-27 11:45:33.438786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.205 [2024-10-27 11:45:33.438882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.205 [2024-10-27 11:45:33.438906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:48.206 [2024-10-27 11:45:33.438916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.206 [2024-10-27 11:45:33.438924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.206 [2024-10-27 11:45:33.438953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.206 [2024-10-27 11:45:33.438964] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:48.206 [2024-10-27 11:45:33.438973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.206 [2024-10-27 11:45:33.438983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.206 [2024-10-27 11:45:33.439034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.206 [2024-10-27 11:45:33.439056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:48.206 [2024-10-27 11:45:33.439065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.206 [2024-10-27 11:45:33.439074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.206 [2024-10-27 11:45:33.439130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.206 [2024-10-27 11:45:33.439141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:48.206 [2024-10-27 11:45:33.439150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.206 [2024-10-27 11:45:33.439158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.206 [2024-10-27 11:45:33.439336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 218.083 ms, result 0 00:30:49.149 00:30:49.149 00:30:49.149 11:45:34 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:51.695 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:51.695 11:45:36 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:51.695 [2024-10-27 11:45:36.538107] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:30:51.695 [2024-10-27 11:45:36.538469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82600 ] 00:30:51.695 [2024-10-27 11:45:36.696185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.695 [2024-10-27 11:45:36.808163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.957 [2024-10-27 11:45:37.093887] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:51.957 [2024-10-27 11:45:37.093961] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:52.220 [2024-10-27 11:45:37.255012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.255067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:52.220 [2024-10-27 11:45:37.255086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:52.220 [2024-10-27 11:45:37.255094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.255148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.255159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:52.220 [2024-10-27 11:45:37.255171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:52.220 [2024-10-27 11:45:37.255179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.255200] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:52.220 [2024-10-27 11:45:37.255901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:52.220 [2024-10-27 11:45:37.255923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.255931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:52.220 [2024-10-27 11:45:37.255940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:30:52.220 [2024-10-27 11:45:37.255949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256227] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:52.220 [2024-10-27 11:45:37.256252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.256261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:52.220 [2024-10-27 11:45:37.256273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:52.220 [2024-10-27 11:45:37.256281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.256368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:52.220 [2024-10-27 11:45:37.256377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:52.220 [2024-10-27 11:45:37.256384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:52.220 [2024-10-27 11:45:37.256717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:52.220 [2024-10-27 11:45:37.256729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:30:52.220 [2024-10-27 11:45:37.256737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.256816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:52.220 [2024-10-27 11:45:37.256824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:52.220 [2024-10-27 11:45:37.256831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.256862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:52.220 [2024-10-27 11:45:37.256870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:52.220 [2024-10-27 11:45:37.256881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.256900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:52.220 [2024-10-27 11:45:37.261082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.261119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:52.220 [2024-10-27 11:45:37.261129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:30:52.220 [2024-10-27 11:45:37.261136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.261177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.261185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:52.220 [2024-10-27 11:45:37.261194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:52.220 [2024-10-27 11:45:37.261201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.261254] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:52.220 [2024-10-27 11:45:37.261308] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:52.220 [2024-10-27 11:45:37.261347] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:52.220 [2024-10-27 11:45:37.261363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:52.220 [2024-10-27 11:45:37.261467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:52.220 [2024-10-27 11:45:37.261477] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:52.220 [2024-10-27 11:45:37.261488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:52.220 [2024-10-27 11:45:37.261499] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:52.220 [2024-10-27 11:45:37.261508] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:52.220 [2024-10-27 11:45:37.261516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:52.220 [2024-10-27 11:45:37.261524] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:52.220 [2024-10-27 11:45:37.261534] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:52.220 [2024-10-27 11:45:37.261542] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:52.220 [2024-10-27 11:45:37.261549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.261557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:52.220 [2024-10-27 11:45:37.261565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:30:52.220 [2024-10-27 11:45:37.261573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.261654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.220 [2024-10-27 11:45:37.261662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:52.220 [2024-10-27 11:45:37.261670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:52.220 [2024-10-27 11:45:37.261677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.220 [2024-10-27 11:45:37.261782] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:52.220 [2024-10-27 11:45:37.261792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:52.220 [2024-10-27 11:45:37.261800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.220 [2024-10-27 11:45:37.261808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.220 [2024-10-27 11:45:37.261819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:52.220 [2024-10-27 11:45:37.261827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:52.220 [2024-10-27 11:45:37.261836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:52.220 [2024-10-27 11:45:37.261843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:52.221 [2024-10-27 11:45:37.261851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.221 [2024-10-27 11:45:37.261865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:52.221 [2024-10-27 11:45:37.261872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:52.221 [2024-10-27 11:45:37.261879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:52.221 [2024-10-27 11:45:37.261885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:52.221 [2024-10-27 11:45:37.261893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:52.221 [2024-10-27 11:45:37.261899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:52.221 [2024-10-27 11:45:37.261919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:52.221 [2024-10-27 11:45:37.261926] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:52.221 [2024-10-27 11:45:37.261941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.221 [2024-10-27 11:45:37.261955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:52.221 [2024-10-27 11:45:37.261962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.221 [2024-10-27 11:45:37.261977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:52.221 [2024-10-27 11:45:37.261985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:52.221 [2024-10-27 11:45:37.261991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.221 [2024-10-27 11:45:37.261998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:52.221 [2024-10-27 11:45:37.262005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:52.221 [2024-10-27 11:45:37.262012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:52.221 [2024-10-27 11:45:37.262018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:52.221 [2024-10-27 11:45:37.262025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:52.221 [2024-10-27 11:45:37.262031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.221 [2024-10-27 11:45:37.262038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:52.221 [2024-10-27 11:45:37.262045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:52.221 [2024-10-27 11:45:37.262054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:52.221 [2024-10-27 11:45:37.262060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:52.221 [2024-10-27 11:45:37.262067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:52.221 [2024-10-27 11:45:37.262074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.221 [2024-10-27 11:45:37.262080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:52.221 [2024-10-27 11:45:37.262087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:52.221 [2024-10-27 11:45:37.262093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.221 [2024-10-27 11:45:37.262099] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:52.221 [2024-10-27 11:45:37.262107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:52.221 [2024-10-27 11:45:37.262114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:52.221 [2024-10-27 11:45:37.262121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:52.221 [2024-10-27 11:45:37.262129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:52.221 [2024-10-27 11:45:37.262135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:52.221 [2024-10-27 11:45:37.262142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:52.221 
[2024-10-27 11:45:37.262149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:52.221 [2024-10-27 11:45:37.262155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:52.221 [2024-10-27 11:45:37.262161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:52.221 [2024-10-27 11:45:37.262169] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:52.221 [2024-10-27 11:45:37.262178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:52.221 [2024-10-27 11:45:37.262197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:52.221 [2024-10-27 11:45:37.262204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:52.221 [2024-10-27 11:45:37.262211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:52.221 [2024-10-27 11:45:37.262218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:52.221 [2024-10-27 11:45:37.262227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:52.221 [2024-10-27 11:45:37.262234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:52.221 [2024-10-27 11:45:37.262241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:52.221 [2024-10-27 11:45:37.262248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:52.221 [2024-10-27 11:45:37.262255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:52.221 [2024-10-27 11:45:37.262308] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:52.221 [2024-10-27 11:45:37.262317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:52.221 [2024-10-27 11:45:37.262333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:52.221 [2024-10-27 11:45:37.262341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:52.221 [2024-10-27 11:45:37.262348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:52.221 [2024-10-27 11:45:37.262356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.262363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:52.221 [2024-10-27 11:45:37.262372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:30:52.221 [2024-10-27 11:45:37.262380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.289847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.289884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.221 [2024-10-27 11:45:37.289895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.423 ms 00:30:52.221 [2024-10-27 11:45:37.289903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.289986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.289995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:52.221 [2024-10-27 11:45:37.290004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:52.221 [2024-10-27 11:45:37.290015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.334359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.334406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.221 [2024-10-27 11:45:37.334420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.292 ms 00:30:52.221 [2024-10-27 11:45:37.334429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.334475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.334489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.221 [2024-10-27 11:45:37.334498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:52.221 [2024-10-27 11:45:37.334506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.334614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.334626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.221 [2024-10-27 11:45:37.334635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:52.221 [2024-10-27 11:45:37.334642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.334772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.334781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.221 [2024-10-27 11:45:37.334792] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:52.221 [2024-10-27 11:45:37.334800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.350476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.221 [2024-10-27 11:45:37.350515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:52.221 [2024-10-27 11:45:37.350526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.657 ms 00:30:52.221 [2024-10-27 11:45:37.350533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.221 [2024-10-27 11:45:37.350681] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:52.221 [2024-10-27 11:45:37.350694] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:52.221 [2024-10-27 11:45:37.350704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.350712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:52.222 [2024-10-27 11:45:37.350724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:52.222 [2024-10-27 11:45:37.350731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.363218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.363254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:52.222 [2024-10-27 11:45:37.363265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:30:52.222 [2024-10-27 11:45:37.363273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.363411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.363422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:52.222 [2024-10-27 11:45:37.363430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:52.222 [2024-10-27 11:45:37.363438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.363492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.363501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:52.222 [2024-10-27 11:45:37.363509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:52.222 [2024-10-27 11:45:37.363517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.364101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.364123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:52.222 [2024-10-27 11:45:37.364132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:30:52.222 [2024-10-27 11:45:37.364139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.364156] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:52.222 [2024-10-27 11:45:37.364168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.364177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:52.222 [2024-10-27 11:45:37.364185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:52.222 [2024-10-27 11:45:37.364192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.376747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:52.222 [2024-10-27 11:45:37.376897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.376908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:52.222 [2024-10-27 11:45:37.376918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.687 ms 00:30:52.222 [2024-10-27 11:45:37.376926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.379108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.379134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:52.222 [2024-10-27 11:45:37.379146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:30:52.222 [2024-10-27 11:45:37.379154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.379242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.379253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:52.222 [2024-10-27 11:45:37.379261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:52.222 [2024-10-27 11:45:37.379269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.379306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.379317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:52.222 [2024-10-27 11:45:37.379329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:52.222 [2024-10-27 11:45:37.379337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.379367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:52.222 [2024-10-27 11:45:37.379377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.379385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:52.222 [2024-10-27 11:45:37.379393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:52.222 [2024-10-27 11:45:37.379400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.405520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.405587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:52.222 [2024-10-27 11:45:37.405600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.098 ms 00:30:52.222 [2024-10-27 11:45:37.405608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.405694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.222 [2024-10-27 11:45:37.405705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:52.222 [2024-10-27 11:45:37.405714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:30:52.222 [2024-10-27 11:45:37.405722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.222 [2024-10-27 11:45:37.406949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.467 ms, result 0 00:30:53.167  [2024-10-27T11:45:39.843Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-27T11:45:40.786Z] Copying: 44/1024 [MB] (33 MBps) [2024-10-27T11:45:41.728Z] Copying: 62/1024 [MB] (18 MBps) [2024-10-27T11:45:42.672Z] Copying: 77/1024 [MB] (15 MBps) [2024-10-27T11:45:43.617Z] Copying: 90/1024 [MB] (13 MBps) [2024-10-27T11:45:44.560Z] Copying: 102/1024 [MB] (11 MBps) [2024-10-27T11:45:45.505Z] Copying: 118/1024 [MB] (15 MBps) [2024-10-27T11:45:46.450Z] Copying: 138/1024 [MB] (20 MBps) [2024-10-27T11:45:47.838Z] Copying: 166/1024 [MB] (28 MBps) [2024-10-27T11:45:48.782Z] Copying: 201/1024 [MB] (34 MBps) [2024-10-27T11:45:49.726Z] Copying: 212/1024 [MB] (11 MBps) [2024-10-27T11:45:50.669Z] Copying: 240/1024 [MB] (28 MBps) [2024-10-27T11:45:51.615Z] Copying: 265/1024 [MB] (25 MBps) [2024-10-27T11:45:52.561Z] Copying: 279/1024 [MB] (13 MBps) [2024-10-27T11:45:53.505Z] Copying: 294/1024 [MB] (14 MBps) [2024-10-27T11:45:54.449Z] Copying: 320/1024 [MB] (25 MBps) [2024-10-27T11:45:55.830Z] Copying: 340/1024 [MB] (20 MBps) [2024-10-27T11:45:56.772Z] Copying: 365/1024 [MB] (24 MBps) [2024-10-27T11:45:57.713Z] Copying: 394/1024 [MB] (28 MBps) [2024-10-27T11:45:58.673Z] Copying: 418/1024 [MB] (24 MBps) [2024-10-27T11:45:59.698Z] Copying: 437/1024 [MB] (19 MBps) [2024-10-27T11:46:00.639Z] Copying: 448/1024 [MB] (11 MBps) [2024-10-27T11:46:01.583Z] Copying: 469/1024 [MB] (21 MBps) [2024-10-27T11:46:02.529Z] Copying: 506/1024 [MB] (36 MBps) [2024-10-27T11:46:03.474Z] Copying: 531/1024 [MB] (24 MBps) [2024-10-27T11:46:04.862Z] Copying: 553/1024 [MB] (22 MBps) [2024-10-27T11:46:05.436Z] Copying: 588/1024 [MB] (35 MBps) [2024-10-27T11:46:06.825Z] Copying: 609/1024 [MB] (20 MBps) [2024-10-27T11:46:07.769Z] Copying: 621/1024 [MB] (12 MBps) [2024-10-27T11:46:08.711Z] Copying: 635/1024 [MB] (13 MBps) [2024-10-27T11:46:09.655Z] Copying: 660/1024 [MB] (24 MBps) [2024-10-27T11:46:10.593Z] Copying: 688/1024 [MB] (28 MBps) [2024-10-27T11:46:11.536Z] Copying: 717/1024 [MB] (28 MBps) [2024-10-27T11:46:12.485Z] Copying: 729/1024 [MB] (11 MBps) [2024-10-27T11:46:13.431Z] Copying: 755/1024 [MB] (26 MBps) [2024-10-27T11:46:14.821Z] Copying: 772/1024 [MB] (16 MBps) [2024-10-27T11:46:15.766Z] Copying: 785/1024 [MB] (13 MBps) [2024-10-27T11:46:16.710Z] Copying: 803/1024 [MB] (18 MBps) [2024-10-27T11:46:17.654Z] Copying: 822/1024 [MB] (18 MBps) [2024-10-27T11:46:18.598Z] Copying: 846/1024 [MB] (23 MBps) [2024-10-27T11:46:19.539Z] Copying: 868/1024 [MB] (22 MBps) [2024-10-27T11:46:20.482Z] Copying: 892/1024 [MB] (23 MBps) [2024-10-27T11:46:21.426Z] Copying: 917/1024 [MB] (25 MBps) [2024-10-27T11:46:22.814Z] Copying: 941/1024 [MB] (23 MBps) [2024-10-27T11:46:23.757Z] Copying: 964/1024 [MB] (22 MBps) [2024-10-27T11:46:24.699Z] Copying: 994/1024 [MB] (30 MBps) [2024-10-27T11:46:25.644Z] Copying: 1020/1024 [MB] (26 MBps) [2024-10-27T11:46:25.644Z] Copying: 1048524/1048576 [kB] (3212 kBps) [2024-10-27T11:46:25.644Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-10-27 11:46:25.495581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.363 [2024-10-27 11:46:25.495675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:40.363 [2024-10-27 11:46:25.495693] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:40.363 [2024-10-27 11:46:25.495702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.363 [2024-10-27 11:46:25.497035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.363 [2024-10-27 11:46:25.500565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.363 [2024-10-27 11:46:25.500615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:40.363 [2024-10-27 11:46:25.500628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:31:40.363 [2024-10-27 11:46:25.500636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.363 [2024-10-27 11:46:25.513492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.363 [2024-10-27 11:46:25.513547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:40.363 [2024-10-27 11:46:25.513568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.668 ms 00:31:40.363 [2024-10-27 11:46:25.513577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.363 [2024-10-27 11:46:25.513608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.363 [2024-10-27 11:46:25.513618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:40.363 [2024-10-27 11:46:25.513627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:40.363 [2024-10-27 11:46:25.513636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.363 [2024-10-27 11:46:25.513695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.363 [2024-10-27 11:46:25.513706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:40.363 [2024-10-27 11:46:25.513715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:40.363 [2024-10-27 11:46:25.513725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.363 [2024-10-27 11:46:25.513740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:40.363 [2024-10-27 11:46:25.513753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:31:40.363 [2024-10-27 11:46:25.513763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 
00:31:40.363 [2024-10-27 11:46:25.513827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:40.363 [2024-10-27 11:46:25.513998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 
wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514435] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:40.364 [2024-10-27 11:46:25.514579] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:40.364 [2024-10-27 11:46:25.514587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 00:31:40.364 [2024-10-27 11:46:25.514595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:31:40.364 [2024-10-27 11:46:25.514603] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127776 00:31:40.364 [2024-10-27 11:46:25.514610] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:31:40.364 [2024-10-27 11:46:25.514619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:31:40.365 [2024-10-27 11:46:25.514627] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:40.365 [2024-10-27 11:46:25.514634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:40.365 [2024-10-27 11:46:25.514644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:40.365 [2024-10-27 11:46:25.514651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:40.365 [2024-10-27 
11:46:25.514658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:40.365 [2024-10-27 11:46:25.514665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.365 [2024-10-27 11:46:25.514673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:40.365 [2024-10-27 11:46:25.514681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:31:40.365 [2024-10-27 11:46:25.514692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.528459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.365 [2024-10-27 11:46:25.528506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:40.365 [2024-10-27 11:46:25.528519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.750 ms 00:31:40.365 [2024-10-27 11:46:25.528527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.528939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.365 [2024-10-27 11:46:25.528975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:40.365 [2024-10-27 11:46:25.528985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:31:40.365 [2024-10-27 11:46:25.528993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.565384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.365 [2024-10-27 11:46:25.565435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.365 [2024-10-27 11:46:25.565449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.365 [2024-10-27 11:46:25.565457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.565523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.365 [2024-10-27 11:46:25.565532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.365 [2024-10-27 11:46:25.565540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.365 [2024-10-27 11:46:25.565548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.565604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.365 [2024-10-27 11:46:25.565616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.365 [2024-10-27 11:46:25.565624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.365 [2024-10-27 11:46:25.565636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.365 [2024-10-27 11:46:25.565652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.365 [2024-10-27 11:46:25.565661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.365 [2024-10-27 11:46:25.565668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.365 [2024-10-27 11:46:25.565676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.626 [2024-10-27 11:46:25.650752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.626 [2024-10-27 11:46:25.650805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:40.626 [2024-10-27 11:46:25.650818] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.626 [2024-10-27 11:46:25.650834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.626 [2024-10-27 11:46:25.719171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.626 [2024-10-27 11:46:25.719235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:40.626 [2024-10-27 11:46:25.719247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.626 [2024-10-27 11:46:25.719263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.626 [2024-10-27 11:46:25.719374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.626 [2024-10-27 11:46:25.719389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:40.626 [2024-10-27 11:46:25.719398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.626 [2024-10-27 11:46:25.719407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.626 [2024-10-27 11:46:25.719447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.626 [2024-10-27 11:46:25.719457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:40.626 [2024-10-27 11:46:25.719465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.626 [2024-10-27 11:46:25.719473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.626 [2024-10-27 11:46:25.719558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.626 [2024-10-27 11:46:25.719569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:40.627 [2024-10-27 11:46:25.719577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.627 [2024-10-27 11:46:25.719585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.627 [2024-10-27 11:46:25.719612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.627 [2024-10-27 11:46:25.719625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:40.627 [2024-10-27 11:46:25.719635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.627 [2024-10-27 11:46:25.719643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.627 [2024-10-27 11:46:25.719682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.627 [2024-10-27 11:46:25.719693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:40.627 [2024-10-27 11:46:25.719702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.627 [2024-10-27 11:46:25.719710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.627 [2024-10-27 11:46:25.719757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.627 [2024-10-27 11:46:25.719769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:40.627 [2024-10-27 11:46:25.719779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.627 [2024-10-27 11:46:25.719788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.627 [2024-10-27 11:46:25.719915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 227.988 ms, result 0 00:31:42.542 00:31:42.542 00:31:42.542 
11:46:27 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:42.542 [2024-10-27 11:46:27.564725] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 00:31:42.542 [2024-10-27 11:46:27.564866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83103 ] 00:31:42.542 [2024-10-27 11:46:27.730149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.804 [2024-10-27 11:46:27.853805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.092 [2024-10-27 11:46:28.141672] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.092 [2024-10-27 11:46:28.141757] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.092 [2024-10-27 11:46:28.303308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.303373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:43.092 [2024-10-27 11:46:28.303392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:43.092 [2024-10-27 11:46:28.303401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.303457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.303468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.092 [2024-10-27 11:46:28.303480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:43.092 [2024-10-27 11:46:28.303488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.303509] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:43.092 [2024-10-27 11:46:28.304197] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:43.092 [2024-10-27 11:46:28.304227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.304235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.092 [2024-10-27 11:46:28.304245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:31:43.092 [2024-10-27 11:46:28.304253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.305329] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:43.092 [2024-10-27 11:46:28.305396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.305407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:43.092 [2024-10-27 11:46:28.305424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:31:43.092 [2024-10-27 11:46:28.305432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.305551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.305563] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Validate super block 00:31:43.092 [2024-10-27 11:46:28.305572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:43.092 [2024-10-27 11:46:28.305581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.305883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.305894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.092 [2024-10-27 11:46:28.305907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:31:43.092 [2024-10-27 11:46:28.305915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.305990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.306002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.092 [2024-10-27 11:46:28.306011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:43.092 [2024-10-27 11:46:28.306019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.306042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.306052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:43.092 [2024-10-27 11:46:28.306061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:43.092 [2024-10-27 11:46:28.306071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.306094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:43.092 [2024-10-27 11:46:28.310389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.310436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.092 [2024-10-27 11:46:28.310446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:31:43.092 [2024-10-27 11:46:28.310454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.310490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.092 [2024-10-27 11:46:28.310500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:43.092 [2024-10-27 11:46:28.310508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:43.092 [2024-10-27 11:46:28.310516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.092 [2024-10-27 11:46:28.310577] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:43.092 [2024-10-27 11:46:28.310602] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:43.093 [2024-10-27 11:46:28.310642] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:43.093 [2024-10-27 11:46:28.310657] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:43.093 [2024-10-27 11:46:28.310762] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:43.093 [2024-10-27 11:46:28.310773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout 
blob store 0x48 bytes 00:31:43.093 [2024-10-27 11:46:28.310784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:43.093 [2024-10-27 11:46:28.310795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:43.093 [2024-10-27 11:46:28.310804] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:43.093 [2024-10-27 11:46:28.310813] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:43.093 [2024-10-27 11:46:28.310820] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:43.093 [2024-10-27 11:46:28.310830] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:43.093 [2024-10-27 11:46:28.310838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:43.093 [2024-10-27 11:46:28.310846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.093 [2024-10-27 11:46:28.310853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:43.093 [2024-10-27 11:46:28.310861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:31:43.093 [2024-10-27 11:46:28.310868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.093 [2024-10-27 11:46:28.310954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.093 [2024-10-27 11:46:28.310971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:43.093 [2024-10-27 11:46:28.310979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:43.093 [2024-10-27 11:46:28.310988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.093 [2024-10-27 11:46:28.311094] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:43.093 [2024-10-27 11:46:28.311112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:43.093 [2024-10-27 11:46:28.311122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:43.093 [2024-10-27 11:46:28.311145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:43.093 [2024-10-27 11:46:28.311166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.093 [2024-10-27 11:46:28.311180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:43.093 [2024-10-27 11:46:28.311189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:43.093 [2024-10-27 11:46:28.311197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.093 [2024-10-27 11:46:28.311204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:43.093 [2024-10-27 11:46:28.311212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:43.093 [2024-10-27 11:46:28.311219] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:43.093 [2024-10-27 11:46:28.311240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:43.093 [2024-10-27 11:46:28.311260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:43.093 [2024-10-27 11:46:28.311280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:43.093 [2024-10-27 11:46:28.311315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:43.093 [2024-10-27 11:46:28.311337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:43.093 [2024-10-27 11:46:28.311358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.093 [2024-10-27 11:46:28.311373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:43.093 [2024-10-27 11:46:28.311380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:43.093 [2024-10-27 11:46:28.311387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.093 [2024-10-27 11:46:28.311394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:43.093 [2024-10-27 11:46:28.311401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:43.093 [2024-10-27 11:46:28.311408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:43.093 [2024-10-27 11:46:28.311422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:43.093 [2024-10-27 11:46:28.311428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.093 [2024-10-27 11:46:28.311437] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:43.093 [2024-10-27 11:46:28.311446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:43.093 [2024-10-27 11:46:28.311455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:31:43.093 [2024-10-27 11:46:28.311470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:43.093 [2024-10-27 11:46:28.311478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:43.093 [2024-10-27 11:46:28.311485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:43.093 [2024-10-27 11:46:28.311492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:43.093 [2024-10-27 11:46:28.311499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:43.093 [2024-10-27 11:46:28.311506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:43.093 [2024-10-27 11:46:28.311515] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:43.093 [2024-10-27 11:46:28.311524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:43.093 [2024-10-27 11:46:28.311542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:43.093 [2024-10-27 11:46:28.311550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:43.093 [2024-10-27 11:46:28.311557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:43.093 [2024-10-27 11:46:28.311564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:43.093 [2024-10-27 11:46:28.311571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:43.093 [2024-10-27 11:46:28.311578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:43.093 [2024-10-27 11:46:28.311585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:43.093 [2024-10-27 11:46:28.311592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:43.093 [2024-10-27 11:46:28.311600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:43.093 [2024-10-27 11:46:28.311636] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] SB metadata layout - base dev: 00:31:43.093 [2024-10-27 11:46:28.311644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:43.093 [2024-10-27 11:46:28.311660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:43.093 [2024-10-27 11:46:28.311668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:43.093 [2024-10-27 11:46:28.311675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:43.093 [2024-10-27 11:46:28.311685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.093 [2024-10-27 11:46:28.311693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:43.093 [2024-10-27 11:46:28.311701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:31:43.093 [2024-10-27 11:46:28.311709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.093 [2024-10-27 11:46:28.339406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.093 [2024-10-27 11:46:28.339456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:43.093 [2024-10-27 11:46:28.339468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.652 ms 00:31:43.093 [2024-10-27 11:46:28.339477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.093 [2024-10-27 11:46:28.339565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.093 [2024-10-27 11:46:28.339574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:43.094 [2024-10-27 11:46:28.339583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:43.094 [2024-10-27 11:46:28.339595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.386616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.386677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:43.384 [2024-10-27 11:46:28.386691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.964 ms 00:31:43.384 [2024-10-27 11:46:28.386701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.386747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.386761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:43.384 [2024-10-27 11:46:28.386770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:43.384 [2024-10-27 11:46:28.386779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.386893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.386905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:43.384 [2024-10-27 11:46:28.386914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:43.384 [2024-10-27 
11:46:28.386922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.387051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.387061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.384 [2024-10-27 11:46:28.387072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:43.384 [2024-10-27 11:46:28.387082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.402776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.402830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.384 [2024-10-27 11:46:28.402841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.674 ms 00:31:43.384 [2024-10-27 11:46:28.402849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.403001] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:43.384 [2024-10-27 11:46:28.403016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:43.384 [2024-10-27 11:46:28.403026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.403034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:43.384 [2024-10-27 11:46:28.403045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:43.384 [2024-10-27 11:46:28.403052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.415338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.415386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:43.384 [2024-10-27 11:46:28.415396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:31:43.384 [2024-10-27 11:46:28.415404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.415527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.415536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:43.384 [2024-10-27 11:46:28.415546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:43.384 [2024-10-27 11:46:28.415554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.415611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.415621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:43.384 [2024-10-27 11:46:28.415629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:43.384 [2024-10-27 11:46:28.415637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.416227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.416255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:43.384 [2024-10-27 11:46:28.416264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:31:43.384 [2024-10-27 11:46:28.416272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.416288] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:43.384 [2024-10-27 11:46:28.416328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.416337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:43.384 [2024-10-27 11:46:28.416345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:43.384 [2024-10-27 11:46:28.416352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.428862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:43.384 [2024-10-27 11:46:28.429030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.429041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:43.384 [2024-10-27 11:46:28.429051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.659 ms 00:31:43.384 [2024-10-27 11:46:28.429059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.431347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.431384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:43.384 [2024-10-27 11:46:28.431398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:31:43.384 [2024-10-27 11:46:28.431405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.431481] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:43.384 [2024-10-27 11:46:28.431937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.432006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:43.384 [2024-10-27 11:46:28.432016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:31:43.384 [2024-10-27 11:46:28.432023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.432049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.432064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:43.384 [2024-10-27 11:46:28.432072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:43.384 [2024-10-27 11:46:28.432080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.432113] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:43.384 [2024-10-27 11:46:28.432124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.432132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:43.384 [2024-10-27 11:46:28.432141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:43.384 [2024-10-27 11:46:28.432148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.459483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.459542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:43.384 [2024-10-27 
11:46:28.459555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.315 ms 00:31:43.384 [2024-10-27 11:46:28.459563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.459652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.384 [2024-10-27 11:46:28.459662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:43.384 [2024-10-27 11:46:28.459671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:43.384 [2024-10-27 11:46:28.459680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.384 [2024-10-27 11:46:28.460933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.185 ms, result 0 00:31:44.770  [2024-10-27T11:46:30.994Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-27T11:46:31.939Z] Copying: 25/1024 [MB] (12 MBps) [2024-10-27T11:46:32.884Z] Copying: 43/1024 [MB] (18 MBps) [2024-10-27T11:46:33.836Z] Copying: 67/1024 [MB] (23 MBps) [2024-10-27T11:46:34.781Z] Copying: 89/1024 [MB] (22 MBps) [2024-10-27T11:46:35.726Z] Copying: 109/1024 [MB] (19 MBps) [2024-10-27T11:46:36.671Z] Copying: 128/1024 [MB] (19 MBps) [2024-10-27T11:46:38.060Z] Copying: 156/1024 [MB] (28 MBps) [2024-10-27T11:46:39.009Z] Copying: 175/1024 [MB] (19 MBps) [2024-10-27T11:46:39.954Z] Copying: 195/1024 [MB] (19 MBps) [2024-10-27T11:46:40.898Z] Copying: 217/1024 [MB] (22 MBps) [2024-10-27T11:46:41.842Z] Copying: 238/1024 [MB] (20 MBps) [2024-10-27T11:46:42.787Z] Copying: 259/1024 [MB] (21 MBps) [2024-10-27T11:46:43.731Z] Copying: 277/1024 [MB] (17 MBps) [2024-10-27T11:46:44.675Z] Copying: 297/1024 [MB] (20 MBps) [2024-10-27T11:46:46.062Z] Copying: 309/1024 [MB] (11 MBps) [2024-10-27T11:46:47.006Z] Copying: 325/1024 [MB] (15 MBps) [2024-10-27T11:46:47.950Z] Copying: 339/1024 [MB] (14 MBps) [2024-10-27T11:46:48.894Z] Copying: 357/1024 [MB] (17 MBps) [2024-10-27T11:46:49.843Z] Copying: 372/1024 [MB] (15 MBps) [2024-10-27T11:46:50.783Z] Copying: 391/1024 [MB] (18 MBps) [2024-10-27T11:46:51.728Z] Copying: 412/1024 [MB] (20 MBps) [2024-10-27T11:46:52.671Z] Copying: 425/1024 [MB] (12 MBps) [2024-10-27T11:46:54.058Z] Copying: 444/1024 [MB] (19 MBps) [2024-10-27T11:46:55.002Z] Copying: 465/1024 [MB] (20 MBps) [2024-10-27T11:46:55.947Z] Copying: 486/1024 [MB] (20 MBps) [2024-10-27T11:46:56.891Z] Copying: 507/1024 [MB] (21 MBps) [2024-10-27T11:46:57.834Z] Copying: 527/1024 [MB] (19 MBps) [2024-10-27T11:46:58.775Z] Copying: 545/1024 [MB] (17 MBps) [2024-10-27T11:46:59.719Z] Copying: 560/1024 [MB] (15 MBps) [2024-10-27T11:47:01.131Z] Copying: 579/1024 [MB] (19 MBps) [2024-10-27T11:47:01.747Z] Copying: 590/1024 [MB] (11 MBps) [2024-10-27T11:47:02.692Z] Copying: 608/1024 [MB] (17 MBps) [2024-10-27T11:47:04.077Z] Copying: 628/1024 [MB] (20 MBps) [2024-10-27T11:47:05.021Z] Copying: 646/1024 [MB] (17 MBps) [2024-10-27T11:47:05.963Z] Copying: 660/1024 [MB] (14 MBps) [2024-10-27T11:47:06.908Z] Copying: 672/1024 [MB] (11 MBps) [2024-10-27T11:47:07.853Z] Copying: 684/1024 [MB] (11 MBps) [2024-10-27T11:47:08.794Z] Copying: 695/1024 [MB] (11 MBps) [2024-10-27T11:47:09.735Z] Copying: 709/1024 [MB] (13 MBps) [2024-10-27T11:47:10.678Z] Copying: 721/1024 [MB] (12 MBps) [2024-10-27T11:47:12.063Z] Copying: 733/1024 [MB] (12 MBps) [2024-10-27T11:47:13.008Z] Copying: 747/1024 [MB] (14 MBps) [2024-10-27T11:47:13.952Z] Copying: 761/1024 [MB] (13 MBps) [2024-10-27T11:47:14.893Z] Copying: 776/1024 [MB] (15 MBps) 
[2024-10-27T11:47:15.832Z] Copying: 788/1024 [MB] (11 MBps) [2024-10-27T11:47:16.774Z] Copying: 805/1024 [MB] (16 MBps) [2024-10-27T11:47:17.715Z] Copying: 819/1024 [MB] (14 MBps) [2024-10-27T11:47:19.102Z] Copying: 833/1024 [MB] (14 MBps) [2024-10-27T11:47:19.675Z] Copying: 847/1024 [MB] (13 MBps) [2024-10-27T11:47:21.060Z] Copying: 862/1024 [MB] (15 MBps) [2024-10-27T11:47:22.006Z] Copying: 879/1024 [MB] (16 MBps) [2024-10-27T11:47:22.951Z] Copying: 890/1024 [MB] (10 MBps) [2024-10-27T11:47:23.895Z] Copying: 900/1024 [MB] (10 MBps) [2024-10-27T11:47:24.839Z] Copying: 923/1024 [MB] (22 MBps) [2024-10-27T11:47:25.783Z] Copying: 938/1024 [MB] (15 MBps) [2024-10-27T11:47:26.727Z] Copying: 955/1024 [MB] (16 MBps) [2024-10-27T11:47:27.672Z] Copying: 974/1024 [MB] (18 MBps) [2024-10-27T11:47:29.061Z] Copying: 988/1024 [MB] (13 MBps) [2024-10-27T11:47:30.004Z] Copying: 998/1024 [MB] (10 MBps) [2024-10-27T11:47:30.949Z] Copying: 1009/1024 [MB] (10 MBps) [2024-10-27T11:47:30.949Z] Copying: 1020/1024 [MB] (11 MBps) [2024-10-27T11:47:31.212Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-27 11:47:31.012022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.931 [2024-10-27 11:47:31.012118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:45.931 [2024-10-27 11:47:31.012138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:45.931 [2024-10-27 11:47:31.012149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.931 [2024-10-27 11:47:31.012181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:45.931 [2024-10-27 11:47:31.016081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.931 [2024-10-27 11:47:31.016131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:45.931 [2024-10-27 11:47:31.016147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:32:45.931 [2024-10-27 11:47:31.016159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.931 [2024-10-27 11:47:31.016529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.931 [2024-10-27 11:47:31.016575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:45.931 [2024-10-27 11:47:31.016590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:32:45.931 [2024-10-27 11:47:31.016602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.931 [2024-10-27 11:47:31.016640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.931 [2024-10-27 11:47:31.016654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:45.931 [2024-10-27 11:47:31.016666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:45.931 [2024-10-27 11:47:31.016681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.931 [2024-10-27 11:47:31.016754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.931 [2024-10-27 11:47:31.016779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:45.931 [2024-10-27 11:47:31.016795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:45.931 [2024-10-27 11:47:31.016806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.931 [2024-10-27 11:47:31.016827] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:45.931 [2024-10-27 11:47:31.016853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:45.931 [2024-10-27 11:47:31.016874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.016886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.016900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.016920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.016937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017271] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:45.931 [2024-10-27 11:47:31.017353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 
11:47:31.017804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.017990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:32:45.932 [2024-10-27 11:47:31.018135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.018400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:45.932 [2024-10-27 11:47:31.019417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:45.932 [2024-10-27 11:47:31.019432] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab 00:32:45.932 [2024-10-27 11:47:31.019446] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:45.932 [2024-10-27 11:47:31.019460] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:32:45.932 [2024-10-27 11:47:31.019472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:32:45.932 [2024-10-27 11:47:31.019489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:32:45.932 [2024-10-27 11:47:31.019507] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:45.932 [2024-10-27 11:47:31.019534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:45.932 [2024-10-27 11:47:31.019546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:45.932 [2024-10-27 11:47:31.019556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:45.932 [2024-10-27 11:47:31.019567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:45.933 [2024-10-27 11:47:31.019584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.933 [2024-10-27 11:47:31.019602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:45.933 [2024-10-27 11:47:31.019621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:32:45.933 [2024-10-27 11:47:31.019632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.030158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.933 [2024-10-27 11:47:31.030190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:45.933 [2024-10-27 11:47:31.030199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.500 ms 00:32:45.933 [2024-10-27 11:47:31.030211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.030529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.933 [2024-10-27 11:47:31.030552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:45.933 [2024-10-27 11:47:31.030560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:32:45.933 [2024-10-27 11:47:31.030566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.056971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.057002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.933 [2024-10-27 11:47:31.057013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.057019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.057059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.057065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.933 [2024-10-27 11:47:31.057072] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.057078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.057117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.057124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.933 [2024-10-27 11:47:31.057130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.057138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.057150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.057157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.933 [2024-10-27 11:47:31.057163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.057169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.115539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.115571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.933 [2024-10-27 11:47:31.115583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.115588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:45.933 [2024-10-27 11:47:31.163580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:45.933 [2024-10-27 11:47:31.163650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:45.933 [2024-10-27 11:47:31.163697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:45.933 [2024-10-27 11:47:31.163770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 
00:32:45.933 [2024-10-27 11:47:31.163808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:45.933 [2024-10-27 11:47:31.163867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.163919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.933 [2024-10-27 11:47:31.163929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:45.933 [2024-10-27 11:47:31.163938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.933 [2024-10-27 11:47:31.163947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.933 [2024-10-27 11:47:31.164054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 152.018 ms, result 0 00:32:46.552 00:32:46.552 00:32:46.552 11:47:31 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:48.560 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:48.560 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:48.560 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:48.560 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81131 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 81131 ']' 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 81131 00:32:48.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (81131) - No such process 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 81131 is not found' 00:32:48.830 Process with pid 81131 is not found 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:48.830 Remove shared memory files 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_band_md /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_l2p_l1 /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_l2p_l2 /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_l2p_l2_ctx /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_nvc_md /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_p2l_pool /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_sb /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_sb_shm 
/dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_trim_bitmap /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_trim_log /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_trim_md /dev/hugepages/ftl_ed6d0041-ec2e-4d5a-a4ee-f0592c65f3ab_vmap 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:48.830 00:32:48.830 real 4m22.414s 00:32:48.830 user 4m10.287s 00:32:48.830 sys 0m12.198s 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.830 ************************************ 00:32:48.830 END TEST ftl_restore_fast 00:32:48.830 ************************************ 00:32:48.830 11:47:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@14 -- # killprocess 72103 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 72103 ']' 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@954 -- # kill -0 72103 00:32:48.830 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72103) - No such process 00:32:48.830 Process with pid 72103 is not found 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72103 is not found' 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83807 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83807 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@831 -- # '[' -z 83807 ']' 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.830 11:47:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:48.830 11:47:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:48.830 [2024-10-27 11:47:34.063987] Starting SPDK v25.01-pre git sha1 169c3cd04 / DPDK 24.03.0 initialization... 
00:32:48.830 [2024-10-27 11:47:34.064079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83807 ] 00:32:49.091 [2024-10-27 11:47:34.214662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.091 [2024-10-27 11:47:34.292007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.663 11:47:34 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.663 11:47:34 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:49.663 11:47:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:49.924 nvme0n1 00:32:49.924 11:47:35 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:49.924 11:47:35 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:49.924 11:47:35 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:50.185 11:47:35 ftl -- ftl/common.sh@28 -- # stores=2ddc0898-07da-45e3-9f28-717f998662e9 00:32:50.185 11:47:35 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:50.185 11:47:35 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ddc0898-07da-45e3-9f28-717f998662e9 00:32:50.446 11:47:35 ftl -- ftl/ftl.sh@23 -- # killprocess 83807 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@950 -- # '[' -z 83807 ']' 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@954 -- # kill -0 83807 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@955 -- # uname 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83807 00:32:50.446 killing process with pid 83807 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83807' 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@969 -- # kill 83807 00:32:50.446 11:47:35 ftl -- common/autotest_common.sh@974 -- # wait 83807 00:32:51.832 11:47:36 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:51.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:51.832 Waiting for block devices as requested 00:32:51.832 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.832 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.832 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:51.832 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:57.120 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:57.120 11:47:42 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:57.120 Remove shared memory files 00:32:57.120 11:47:42 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:57.120 11:47:42 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:57.120 11:47:42 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:57.120 11:47:42 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:57.120 11:47:42 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:57.120 11:47:42 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:57.120 
************************************ 00:32:57.120 END TEST ftl 00:32:57.120 ************************************ 00:32:57.120 00:32:57.120 real 17m40.508s 00:32:57.120 user 19m37.747s 00:32:57.120 sys 1m22.323s 00:32:57.120 11:47:42 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.120 11:47:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 11:47:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:57.120 11:47:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:57.120 11:47:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:57.120 11:47:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:57.120 11:47:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:57.120 11:47:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:57.120 11:47:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:57.120 11:47:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:57.120 11:47:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:57.120 11:47:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:57.120 11:47:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:57.120 11:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:57.120 11:47:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:57.120 11:47:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:57.120 11:47:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:57.120 11:47:42 -- common/autotest_common.sh@10 -- # set +x 00:32:58.508 INFO: APP EXITING 00:32:58.508 INFO: killing all VMs 00:32:58.508 INFO: killing vhost app 00:32:58.508 INFO: EXIT DONE 00:32:58.769 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:59.340 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:59.340 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:59.340 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:59.340 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:59.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:00.173 Cleaning 00:33:00.173 Removing: /var/run/dpdk/spdk0/config 00:33:00.173 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:00.173 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:00.173 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:00.173 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:00.173 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:00.173 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:00.173 Removing: /var/run/dpdk/spdk0 00:33:00.173 Removing: /var/run/dpdk/spdk_pid56878 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57080 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57287 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57386 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57420 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57542 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57555 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57748 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57841 00:33:00.173 Removing: /var/run/dpdk/spdk_pid57932 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58037 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58129 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58163 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58199 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58270 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58354 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58779 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58836 
00:33:00.173 Removing: /var/run/dpdk/spdk_pid58895 00:33:00.173 Removing: /var/run/dpdk/spdk_pid58911 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59002 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59015 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59104 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59114 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59173 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59185 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59238 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59256 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59411 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59447 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59531 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59697 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59781 00:33:00.173 Removing: /var/run/dpdk/spdk_pid59818 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60238 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60336 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60445 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60498 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60524 00:33:00.173 Removing: /var/run/dpdk/spdk_pid60602 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61227 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61260 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61732 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61830 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61939 00:33:00.173 Removing: /var/run/dpdk/spdk_pid61992 00:33:00.173 Removing: /var/run/dpdk/spdk_pid62012 00:33:00.173 Removing: /var/run/dpdk/spdk_pid62043 00:33:00.173 Removing: /var/run/dpdk/spdk_pid63877 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64014 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64018 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64030 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64072 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64078 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64090 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64135 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64139 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64151 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64196 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64200 00:33:00.173 Removing: /var/run/dpdk/spdk_pid64212 00:33:00.173 Removing: /var/run/dpdk/spdk_pid65579 00:33:00.173 Removing: /var/run/dpdk/spdk_pid65682 00:33:00.173 Removing: /var/run/dpdk/spdk_pid67083 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68489 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68565 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68641 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68717 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68821 00:33:00.173 Removing: /var/run/dpdk/spdk_pid68891 00:33:00.173 Removing: /var/run/dpdk/spdk_pid69033 00:33:00.173 Removing: /var/run/dpdk/spdk_pid69391 00:33:00.173 Removing: /var/run/dpdk/spdk_pid69423 00:33:00.173 Removing: /var/run/dpdk/spdk_pid69872 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70050 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70151 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70262 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70308 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70335 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70624 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70684 00:33:00.173 Removing: /var/run/dpdk/spdk_pid70756 00:33:00.173 Removing: /var/run/dpdk/spdk_pid71144 00:33:00.173 Removing: /var/run/dpdk/spdk_pid71293 00:33:00.173 Removing: /var/run/dpdk/spdk_pid72103 00:33:00.173 Removing: /var/run/dpdk/spdk_pid72235 00:33:00.173 Removing: /var/run/dpdk/spdk_pid72400 00:33:00.173 Removing: 
/var/run/dpdk/spdk_pid72492 00:33:00.173 Removing: /var/run/dpdk/spdk_pid72789 00:33:00.173 Removing: /var/run/dpdk/spdk_pid73042 00:33:00.173 Removing: /var/run/dpdk/spdk_pid73404 00:33:00.173 Removing: /var/run/dpdk/spdk_pid73589 00:33:00.173 Removing: /var/run/dpdk/spdk_pid73781 00:33:00.173 Removing: /var/run/dpdk/spdk_pid73833 00:33:00.173 Removing: /var/run/dpdk/spdk_pid74031 00:33:00.173 Removing: /var/run/dpdk/spdk_pid74062 00:33:00.173 Removing: /var/run/dpdk/spdk_pid74120 00:33:00.434 Removing: /var/run/dpdk/spdk_pid74358 00:33:00.434 Removing: /var/run/dpdk/spdk_pid74583 00:33:00.434 Removing: /var/run/dpdk/spdk_pid75227 00:33:00.434 Removing: /var/run/dpdk/spdk_pid75969 00:33:00.434 Removing: /var/run/dpdk/spdk_pid76653 00:33:00.434 Removing: /var/run/dpdk/spdk_pid77454 00:33:00.434 Removing: /var/run/dpdk/spdk_pid77605 00:33:00.434 Removing: /var/run/dpdk/spdk_pid77692 00:33:00.434 Removing: /var/run/dpdk/spdk_pid78195 00:33:00.434 Removing: /var/run/dpdk/spdk_pid78251 00:33:00.434 Removing: /var/run/dpdk/spdk_pid78820 00:33:00.434 Removing: /var/run/dpdk/spdk_pid79257 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80084 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80207 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80251 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80308 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80359 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80429 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80613 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80707 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80774 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80833 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80872 00:33:00.434 Removing: /var/run/dpdk/spdk_pid80973 00:33:00.434 Removing: /var/run/dpdk/spdk_pid81131 00:33:00.434 Removing: /var/run/dpdk/spdk_pid81356 00:33:00.434 Removing: /var/run/dpdk/spdk_pid81863 00:33:00.434 Removing: /var/run/dpdk/spdk_pid82600 00:33:00.434 Removing: /var/run/dpdk/spdk_pid83103 00:33:00.434 Removing: /var/run/dpdk/spdk_pid83807 00:33:00.434 Clean 00:33:00.434 11:47:45 -- common/autotest_common.sh@1449 -- # return 0 00:33:00.434 11:47:45 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:00.434 11:47:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.434 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:33:00.434 11:47:45 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:00.434 11:47:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.434 11:47:45 -- common/autotest_common.sh@10 -- # set +x 00:33:00.434 11:47:45 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:00.434 11:47:45 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:00.434 11:47:45 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:00.434 11:47:45 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:00.434 11:47:45 -- spdk/autotest.sh@394 -- # hostname 00:33:00.434 11:47:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:00.695 geninfo: WARNING: invalid characters removed from testname! 
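The capture above writes the per-test coverage data to cov_test.info; the entries that follow merge it with the baseline capture and strip sources that are not SPDK's own (DPDK, /usr, the vmd example, the spdk_lspci and spdk_top apps) before the intermediate tracefiles are deleted. Condensed into a plain shell sketch, with the output path taken from this run and the branch/function --rc flags abbreviated as $LCOV_OPTS (a summary of the traced commands, not a verbatim excerpt of autotest.sh):

    out=/home/vagrant/spdk_repo/spdk/../output
    # merge the baseline and test captures into one tracefile
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # remove coverage for code outside the SPDK tree proper
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
    rm -f "$out"/cov_base.info "$out"/cov_test.info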
00:33:27.286 11:48:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:28.673 11:48:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:31.219 11:48:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:32.603 11:48:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:34.519 11:48:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:36.441 11:48:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:38.356 11:48:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:38.356 11:48:23 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:33:38.356 11:48:23 -- common/autotest_common.sh@1689 -- $ lcov --version 00:33:38.356 11:48:23 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:33:38.356 11:48:23 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:33:38.356 11:48:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:38.356 11:48:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:38.356 11:48:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:38.356 11:48:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:38.356 11:48:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:38.356 11:48:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:38.356 11:48:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:38.356 11:48:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:38.356 11:48:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:38.356 11:48:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:38.356 11:48:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:33:38.356 11:48:23 -- scripts/common.sh@344 -- $ case "$op" in 00:33:38.356 11:48:23 -- scripts/common.sh@345 -- $ : 1 00:33:38.356 11:48:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:38.356 11:48:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:38.356 11:48:23 -- scripts/common.sh@365 -- $ decimal 1 00:33:38.356 11:48:23 -- scripts/common.sh@353 -- $ local d=1 00:33:38.356 11:48:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:38.356 11:48:23 -- scripts/common.sh@355 -- $ echo 1 00:33:38.356 11:48:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:38.356 11:48:23 -- scripts/common.sh@366 -- $ decimal 2 00:33:38.356 11:48:23 -- scripts/common.sh@353 -- $ local d=2 00:33:38.356 11:48:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:38.356 11:48:23 -- scripts/common.sh@355 -- $ echo 2 00:33:38.356 11:48:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:38.356 11:48:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:38.356 11:48:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:38.356 11:48:23 -- scripts/common.sh@368 -- $ return 0 00:33:38.356 11:48:23 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.356 11:48:23 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:33:38.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.356 --rc genhtml_branch_coverage=1 00:33:38.356 --rc genhtml_function_coverage=1 00:33:38.356 --rc genhtml_legend=1 00:33:38.356 --rc geninfo_all_blocks=1 00:33:38.356 --rc geninfo_unexecuted_blocks=1 00:33:38.356 00:33:38.356 ' 00:33:38.356 11:48:23 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:33:38.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.356 --rc genhtml_branch_coverage=1 00:33:38.356 --rc genhtml_function_coverage=1 00:33:38.356 --rc genhtml_legend=1 00:33:38.356 --rc geninfo_all_blocks=1 00:33:38.356 --rc geninfo_unexecuted_blocks=1 00:33:38.356 00:33:38.356 ' 00:33:38.356 11:48:23 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:33:38.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.356 --rc genhtml_branch_coverage=1 00:33:38.356 --rc genhtml_function_coverage=1 00:33:38.356 --rc genhtml_legend=1 00:33:38.356 --rc geninfo_all_blocks=1 00:33:38.356 --rc geninfo_unexecuted_blocks=1 00:33:38.356 00:33:38.356 ' 00:33:38.356 11:48:23 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:33:38.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.356 --rc genhtml_branch_coverage=1 00:33:38.356 --rc genhtml_function_coverage=1 00:33:38.356 --rc genhtml_legend=1 00:33:38.356 --rc geninfo_all_blocks=1 00:33:38.356 --rc geninfo_unexecuted_blocks=1 00:33:38.356 00:33:38.356 ' 00:33:38.356 11:48:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:38.356 11:48:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:33:38.356 11:48:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:38.356 11:48:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.356 11:48:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.356 11:48:23 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.356 11:48:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.356 11:48:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.356 11:48:23 -- paths/export.sh@5 -- $ export PATH 00:33:38.356 11:48:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.356 11:48:23 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:38.356 11:48:23 -- common/autobuild_common.sh@486 -- $ date +%s 00:33:38.356 11:48:23 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730029703.XXXXXX 00:33:38.356 11:48:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730029703.6xR6QI 00:33:38.356 11:48:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:33:38.356 11:48:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:33:38.357 11:48:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:38.357 11:48:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:38.357 11:48:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:38.357 11:48:23 -- common/autobuild_common.sh@502 -- $ get_config_params 00:33:38.357 11:48:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:38.357 11:48:23 -- common/autotest_common.sh@10 -- $ set +x 00:33:38.357 11:48:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:38.357 11:48:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:33:38.357 11:48:23 -- pm/common@17 -- $ local monitor 00:33:38.357 11:48:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:38.357 11:48:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:33:38.357 11:48:23 -- pm/common@25 -- $ sleep 1 00:33:38.357 11:48:23 -- pm/common@21 -- $ date +%s 00:33:38.357 11:48:23 -- pm/common@21 -- $ date +%s 00:33:38.357 11:48:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730029703 00:33:38.357 11:48:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730029703 00:33:38.357 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730029703_collect-cpu-load.pm.log 00:33:38.357 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730029703_collect-vmstat.pm.log 00:33:39.300 11:48:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:33:39.300 11:48:24 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:39.300 11:48:24 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:39.300 11:48:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:39.300 11:48:24 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:39.300 11:48:24 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:39.300 11:48:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:39.300 11:48:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:39.300 11:48:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:39.300 11:48:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.300 11:48:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:39.300 11:48:24 -- pm/common@44 -- $ pid=85489 00:33:39.300 11:48:24 -- pm/common@50 -- $ kill -TERM 85489 00:33:39.300 11:48:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:39.300 11:48:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:39.300 11:48:24 -- pm/common@44 -- $ pid=85490 00:33:39.300 11:48:24 -- pm/common@50 -- $ kill -TERM 85490 00:33:39.300 + [[ -n 5039 ]] 00:33:39.300 + sudo kill 5039 00:33:39.311 [Pipeline] } 00:33:39.329 [Pipeline] // timeout 00:33:39.336 [Pipeline] } 00:33:39.352 [Pipeline] // stage 00:33:39.359 [Pipeline] } 00:33:39.375 [Pipeline] // catchError 00:33:39.385 [Pipeline] stage 00:33:39.388 [Pipeline] { (Stop VM) 00:33:39.402 [Pipeline] sh 00:33:39.691 + vagrant halt 00:33:42.228 ==> default: Halting domain... 00:33:47.581 [Pipeline] sh 00:33:47.866 + vagrant destroy -f 00:33:50.412 ==> default: Removing domain... 
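The pm/common trace above is the resource-monitor shutdown: during the run each monitor (collect-cpu-load, collect-vmstat) recorded its PID under the power output directory, and stop_monitor_resources sends SIGTERM to every pid file it still finds before the VM is halted and destroyed. A rough condensation of that pattern, with the directory taken from this run and the loop paraphrased rather than copied from pm/common:

    # paraphrased pid-file shutdown; mirrors the kill -TERM calls traced above
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile="$power_dir/$monitor.pid"
        [[ -e "$pidfile" ]] && kill -TERM "$(cat "$pidfile")"
    done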
00:33:50.997 [Pipeline] sh 00:33:51.281 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:51.290 [Pipeline] } 00:33:51.303 [Pipeline] // stage 00:33:51.307 [Pipeline] } 00:33:51.319 [Pipeline] // dir 00:33:51.322 [Pipeline] } 00:33:51.335 [Pipeline] // wrap 00:33:51.341 [Pipeline] } 00:33:51.352 [Pipeline] // catchError 00:33:51.361 [Pipeline] stage 00:33:51.363 [Pipeline] { (Epilogue) 00:33:51.375 [Pipeline] sh 00:33:51.657 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:56.943 [Pipeline] catchError 00:33:56.946 [Pipeline] { 00:33:56.960 [Pipeline] sh 00:33:57.247 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:57.247 Artifacts sizes are good 00:33:57.260 [Pipeline] } 00:33:57.277 [Pipeline] // catchError 00:33:57.290 [Pipeline] archiveArtifacts 00:33:57.299 Archiving artifacts 00:33:57.402 [Pipeline] cleanWs 00:33:57.419 [WS-CLEANUP] Deleting project workspace... 00:33:57.419 [WS-CLEANUP] Deferred wipeout is used... 00:33:57.426 [WS-CLEANUP] done 00:33:57.428 [Pipeline] } 00:33:57.448 [Pipeline] // stage 00:33:57.453 [Pipeline] } 00:33:57.470 [Pipeline] // node 00:33:57.477 [Pipeline] End of Pipeline 00:33:57.517 Finished: SUCCESS